2026-03-02 00:00:08.155840 | Job console starting
2026-03-02 00:00:08.185053 | Updating git repos
2026-03-02 00:00:08.681152 | Cloning repos into workspace
2026-03-02 00:00:09.049777 | Restoring repo states
2026-03-02 00:00:09.074478 | Merging changes
2026-03-02 00:00:09.074498 | Checking out repos
2026-03-02 00:00:09.544173 | Preparing playbooks
2026-03-02 00:00:10.751311 | Running Ansible setup
2026-03-02 00:00:18.535525 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-02 00:00:20.626646 |
2026-03-02 00:00:20.626778 | PLAY [Base pre]
2026-03-02 00:00:20.795965 |
2026-03-02 00:00:20.796119 | TASK [Setup log path fact]
2026-03-02 00:00:20.864988 | orchestrator | ok
2026-03-02 00:00:20.954362 |
2026-03-02 00:00:20.954531 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-02 00:00:21.106671 | orchestrator | ok
2026-03-02 00:00:21.130403 |
2026-03-02 00:00:21.130879 | TASK [emit-job-header : Print job information]
2026-03-02 00:00:21.217219 | # Job Information
2026-03-02 00:00:21.217379 | Ansible Version: 2.16.14
2026-03-02 00:00:21.217415 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-02 00:00:21.217450 | Pipeline: periodic-midnight
2026-03-02 00:00:21.217474 | Executor: 521e9411259a
2026-03-02 00:00:21.217548 | Triggered by: https://github.com/osism/testbed
2026-03-02 00:00:21.217581 | Event ID: 0f2d1454de9d48baaf5c2ba128049f38
2026-03-02 00:00:21.225490 |
2026-03-02 00:00:21.225600 | LOOP [emit-job-header : Print node information]
2026-03-02 00:00:21.773262 | orchestrator | ok:
2026-03-02 00:00:21.773556 | orchestrator | # Node Information
2026-03-02 00:00:21.773626 | orchestrator | Inventory Hostname: orchestrator
2026-03-02 00:00:21.773653 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-02 00:00:21.773676 | orchestrator | Username: zuul-testbed03
2026-03-02 00:00:21.773698 | orchestrator | Distro: Debian 12.13
2026-03-02 00:00:21.773722 | orchestrator | Provider: static-testbed
2026-03-02 00:00:21.773744 | orchestrator | Region:
2026-03-02 00:00:21.773765 | orchestrator | Label: testbed-orchestrator
2026-03-02 00:00:21.773786 | orchestrator | Product Name: OpenStack Nova
2026-03-02 00:00:21.773805 | orchestrator | Interface IP: 81.163.193.140
2026-03-02 00:00:21.796426 |
2026-03-02 00:00:21.797975 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-02 00:00:23.440419 | orchestrator -> localhost | changed
2026-03-02 00:00:23.448892 |
2026-03-02 00:00:23.448999 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-02 00:00:26.465953 | orchestrator -> localhost | changed
2026-03-02 00:00:26.516741 |
2026-03-02 00:00:26.516859 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-02 00:00:27.620350 | orchestrator -> localhost | ok
2026-03-02 00:00:27.625976 |
2026-03-02 00:00:27.626063 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-02 00:00:27.653324 | orchestrator | ok
2026-03-02 00:00:27.707008 | orchestrator | included: /var/lib/zuul/builds/a6b4ba0debdc473ca1e410a73926d669/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-02 00:00:27.727886 |
2026-03-02 00:00:27.727982 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-02 00:00:32.435544 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-02 00:00:32.435713 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/a6b4ba0debdc473ca1e410a73926d669/work/a6b4ba0debdc473ca1e410a73926d669_id_rsa
2026-03-02 00:00:32.435745 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/a6b4ba0debdc473ca1e410a73926d669/work/a6b4ba0debdc473ca1e410a73926d669_id_rsa.pub
2026-03-02 00:00:32.435767 | orchestrator -> localhost | The key fingerprint is:
2026-03-02 00:00:32.435790 | orchestrator -> localhost | SHA256:kSec3LDG8LfjTSN771kHrLkTKimMY4Vhq1LRwqeuQxM zuul-build-sshkey
2026-03-02 00:00:32.435809 | orchestrator -> localhost | The key's randomart image is:
2026-03-02 00:00:32.435836 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-02 00:00:32.435855 | orchestrator -> localhost | | . . |
2026-03-02 00:00:32.435873 | orchestrator -> localhost | | * * |
2026-03-02 00:00:32.435890 | orchestrator -> localhost | | . . % + |
2026-03-02 00:00:32.435906 | orchestrator -> localhost | | E+ = . = . . |
2026-03-02 00:00:32.435923 | orchestrator -> localhost | | .* + S + o o |
2026-03-02 00:00:32.435941 | orchestrator -> localhost | | oo o . . *.+ . |
2026-03-02 00:00:32.435958 | orchestrator -> localhost | |.o.. + .o.=. o|
2026-03-02 00:00:32.435974 | orchestrator -> localhost | |o o + o o ...o o.|
2026-03-02 00:00:32.435990 | orchestrator -> localhost | |.+ . . . . .o+ |
2026-03-02 00:00:32.436007 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-02 00:00:32.436050 | orchestrator -> localhost | ok: Runtime: 0:00:03.104197
2026-03-02 00:00:32.441874 |
2026-03-02 00:00:32.441955 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-02 00:00:32.471647 | orchestrator | ok
2026-03-02 00:00:32.498359 | orchestrator | included: /var/lib/zuul/builds/a6b4ba0debdc473ca1e410a73926d669/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-02 00:00:32.546323 |
2026-03-02 00:00:32.546419 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-02 00:00:32.607038 | orchestrator | skipping: Conditional result was False
2026-03-02 00:00:32.626076 |
2026-03-02 00:00:32.626186 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-02 00:00:33.478937 | orchestrator | changed
2026-03-02 00:00:33.484891 |
2026-03-02 00:00:33.484974 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-02 00:00:33.783170 | orchestrator | ok
2026-03-02 00:00:33.788268 |
2026-03-02 00:00:33.788348 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-02 00:00:34.297180 | orchestrator | ok
2026-03-02 00:00:34.303296 |
2026-03-02 00:00:34.303385 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-02 00:00:34.866192 | orchestrator | ok
2026-03-02 00:00:34.876088 |
2026-03-02 00:00:34.876197 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-02 00:00:34.913446 | orchestrator | skipping: Conditional result was False
2026-03-02 00:00:34.919219 |
2026-03-02 00:00:34.919309 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-02 00:00:35.938795 | orchestrator -> localhost | changed
2026-03-02 00:00:35.950925 |
2026-03-02 00:00:35.951021 | TASK [add-build-sshkey : Add back temp key]
2026-03-02 00:00:36.848076 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/a6b4ba0debdc473ca1e410a73926d669/work/a6b4ba0debdc473ca1e410a73926d669_id_rsa (zuul-build-sshkey)
2026-03-02 00:00:36.848283 | orchestrator -> localhost | ok: Runtime: 0:00:00.020095
2026-03-02 00:00:36.854173 |
2026-03-02 00:00:36.854336 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-02 00:00:37.524834 | orchestrator | ok
2026-03-02 00:00:37.534233 |
2026-03-02 00:00:37.534336 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-02 00:00:37.587666 | orchestrator | skipping: Conditional result was False
2026-03-02 00:00:37.730547 |
2026-03-02 00:00:37.730650 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-02 00:00:38.288770 | orchestrator | ok
2026-03-02 00:00:38.309948 |
2026-03-02 00:00:38.310055 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-02 00:00:38.390300 | orchestrator | ok
2026-03-02 00:00:38.398407 |
2026-03-02 00:00:38.398493 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-02 00:00:39.160762 | orchestrator -> localhost | ok
2026-03-02 00:00:39.166633 |
2026-03-02 00:00:39.166715 | TASK [validate-host : Collect information about the host]
2026-03-02 00:00:40.940209 | orchestrator | ok
2026-03-02 00:00:40.975476 |
2026-03-02 00:00:40.975583 | TASK [validate-host : Sanitize hostname]
2026-03-02 00:00:41.115309 | orchestrator | ok
2026-03-02 00:00:41.121798 |
2026-03-02 00:00:41.121882 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-02 00:00:42.673573 | orchestrator -> localhost | changed
2026-03-02 00:00:42.679957 |
2026-03-02 00:00:42.680046 | TASK [validate-host : Collect information about zuul worker]
2026-03-02 00:00:43.365301 | orchestrator | ok
2026-03-02 00:00:43.370563 |
2026-03-02 00:00:43.370649 | TASK [validate-host : Write out all zuul information for each host]
2026-03-02 00:00:44.410414 | orchestrator -> localhost | changed
2026-03-02 00:00:44.419710 |
2026-03-02 00:00:44.419806 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-02 00:00:44.779310 | orchestrator | ok
2026-03-02 00:00:44.794948 |
2026-03-02 00:00:44.795044 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-02 00:02:07.827284 | orchestrator | changed:
2026-03-02 00:02:07.827553 | orchestrator | .d..t...... src/
2026-03-02 00:02:07.827595 | orchestrator | .d..t...... src/github.com/
2026-03-02 00:02:07.827632 | orchestrator | .d..t...... src/github.com/osism/
2026-03-02 00:02:07.827667 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-02 00:02:07.827693 | orchestrator | RedHat.yml
2026-03-02 00:02:07.843457 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-02 00:02:07.843475 | orchestrator | RedHat.yml
2026-03-02 00:02:07.843529 | orchestrator | = 1.53.0"...
2026-03-02 00:02:18.358982 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-02 00:02:18.376616 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-02 00:02:19.121312 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-02 00:02:19.981998 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-02 00:02:20.053525 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-02 00:02:20.627182 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-02 00:02:20.696398 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-02 00:02:21.210925 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-02 00:02:21.211001 | orchestrator |
2026-03-02 00:02:21.211009 | orchestrator | Providers are signed by their developers.
2026-03-02 00:02:21.211015 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-02 00:02:21.211026 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-02 00:02:21.211059 | orchestrator |
2026-03-02 00:02:21.211064 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-02 00:02:21.211069 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-02 00:02:21.211081 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-02 00:02:21.211092 | orchestrator | you run "tofu init" in the future.
2026-03-02 00:02:21.211514 | orchestrator |
2026-03-02 00:02:21.211560 | orchestrator | OpenTofu has been successfully initialized!
2026-03-02 00:02:21.211581 | orchestrator |
2026-03-02 00:02:21.211587 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-02 00:02:21.211591 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-02 00:02:21.211595 | orchestrator | should now work.
2026-03-02 00:02:21.211599 | orchestrator |
2026-03-02 00:02:21.211603 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-02 00:02:21.211607 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-02 00:02:21.211618 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-02 00:02:21.453979 | orchestrator | Created and switched to workspace "ci"!
2026-03-02 00:02:21.454190 | orchestrator |
2026-03-02 00:02:21.454236 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-02 00:02:21.454243 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-02 00:02:21.454248 | orchestrator | for this configuration.
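The provider installs above imply a `required_providers` block roughly like the sketch below. This is illustrative only: the version constraints are inferred from the "Finding ... versions matching" lines in this log (the openstack constraint is truncated in the console output and is reconstructed here as an assumption), not taken from the testbed repository.

```hcl
terraform {
  required_providers {
    # Resolved to v3.4.0 in this run; the constraint is a guess based on
    # the truncated '= 1.53.0' fragment in the console output.
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
    # Log shows: Finding hashicorp/local versions matching ">= 2.2.0" -> v2.7.0
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    # Log shows: Finding latest version of hashicorp/null... -> v3.2.4
    null = {
      source = "hashicorp/null"
    }
  }
}
```

After `tofu init`, the resolved versions are pinned in `.terraform.lock.hcl`, which is why the output above recommends committing that file to version control.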
2026-03-02 00:02:21.563316 | orchestrator | ci.auto.tfvars
2026-03-02 00:02:21.571758 | orchestrator | default_custom.tf
2026-03-02 00:02:22.603000 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-02 00:02:23.191616 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-02 00:02:23.737031 | orchestrator |
2026-03-02 00:02:23.737084 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-02 00:02:23.737092 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-02 00:02:23.737097 | orchestrator |   + create
2026-03-02 00:02:23.737102 | orchestrator |  <= read (data resources)
2026-03-02 00:02:23.737107 | orchestrator |
2026-03-02 00:02:23.737111 | orchestrator | OpenTofu will perform the following actions:
2026-03-02 00:02:23.737115 | orchestrator |
2026-03-02 00:02:23.737119 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-03-02 00:02:23.737123 | orchestrator |   # (config refers to values not yet known)
2026-03-02 00:02:23.737127 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-03-02 00:02:23.737131 | orchestrator |       + checksum = (known after apply)
2026-03-02 00:02:23.737135 | orchestrator |       + created_at = (known after apply)
2026-03-02 00:02:23.737139 | orchestrator |       + file = (known after apply)
2026-03-02 00:02:23.737143 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.737166 | orchestrator |       + metadata = (known after apply)
2026-03-02 00:02:23.737171 | orchestrator |       + min_disk_gb = (known after apply)
2026-03-02 00:02:23.737175 | orchestrator |       + min_ram_mb = (known after apply)
2026-03-02 00:02:23.737179 | orchestrator |       + most_recent = true
2026-03-02 00:02:23.737183 | orchestrator |       + name = (known after apply)
2026-03-02 00:02:23.737187 | orchestrator |       + protected = (known after apply)
2026-03-02 00:02:23.737191 | orchestrator |       + region = (known after apply)
2026-03-02 00:02:23.737197 | orchestrator |       + schema = (known after apply)
2026-03-02 00:02:23.737201 | orchestrator |       + size_bytes = (known after apply)
2026-03-02 00:02:23.737205 | orchestrator |       + tags = (known after apply)
2026-03-02 00:02:23.737209 | orchestrator |       + updated_at = (known after apply)
2026-03-02 00:02:23.737213 | orchestrator |     }
2026-03-02 00:02:23.737217 | orchestrator |
2026-03-02 00:02:23.737222 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-03-02 00:02:23.737226 | orchestrator |   # (config refers to values not yet known)
2026-03-02 00:02:23.737230 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-03-02 00:02:23.737234 | orchestrator |       + checksum = (known after apply)
2026-03-02 00:02:23.737238 | orchestrator |       + created_at = (known after apply)
2026-03-02 00:02:23.737242 | orchestrator |       + file = (known after apply)
2026-03-02 00:02:23.737246 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.737250 | orchestrator |       + metadata = (known after apply)
2026-03-02 00:02:23.737271 | orchestrator |       + min_disk_gb = (known after apply)
2026-03-02 00:02:23.737274 | orchestrator |       + min_ram_mb = (known after apply)
2026-03-02 00:02:23.737278 | orchestrator |       + most_recent = true
2026-03-02 00:02:23.737282 | orchestrator |       + name = (known after apply)
2026-03-02 00:02:23.737286 | orchestrator |       + protected = (known after apply)
2026-03-02 00:02:23.737290 | orchestrator |       + region = (known after apply)
2026-03-02 00:02:23.737294 | orchestrator |       + schema = (known after apply)
2026-03-02 00:02:23.737298 | orchestrator |       + size_bytes = (known after apply)
2026-03-02 00:02:23.737302 | orchestrator |       + tags = (known after apply)
2026-03-02 00:02:23.737306 | orchestrator |       + updated_at = (known after apply)
2026-03-02 00:02:23.737309 | orchestrator |     }
2026-03-02 00:02:23.737313 | orchestrator |
2026-03-02 00:02:23.737317 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-03-02 00:02:23.737321 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-03-02 00:02:23.737325 | orchestrator |       + content = (known after apply)
2026-03-02 00:02:23.737329 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-02 00:02:23.737333 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-02 00:02:23.737337 | orchestrator |       + content_md5 = (known after apply)
2026-03-02 00:02:23.737341 | orchestrator |       + content_sha1 = (known after apply)
2026-03-02 00:02:23.737345 | orchestrator |       + content_sha256 = (known after apply)
2026-03-02 00:02:23.737349 | orchestrator |       + content_sha512 = (known after apply)
2026-03-02 00:02:23.737353 | orchestrator |       + directory_permission = "0777"
2026-03-02 00:02:23.737357 | orchestrator |       + file_permission = "0644"
2026-03-02 00:02:23.737361 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-03-02 00:02:23.737364 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.737368 | orchestrator |     }
2026-03-02 00:02:23.737372 | orchestrator |
2026-03-02 00:02:23.737376 | orchestrator |   # local_file.id_rsa_pub will be created
2026-03-02 00:02:23.737380 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-03-02 00:02:23.737384 | orchestrator |       + content = (known after apply)
2026-03-02 00:02:23.737388 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-02 00:02:23.737392 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-02 00:02:23.737396 | orchestrator |       + content_md5 = (known after apply)
2026-03-02 00:02:23.737400 | orchestrator |       + content_sha1 = (known after apply)
2026-03-02 00:02:23.737403 | orchestrator |       + content_sha256 = (known after apply)
2026-03-02 00:02:23.737407 | orchestrator |       + content_sha512 = (known after apply)
2026-03-02 00:02:23.737411 | orchestrator |       + directory_permission = "0777"
2026-03-02 00:02:23.737415 | orchestrator |       + file_permission = "0644"
2026-03-02 00:02:23.737424 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-03-02 00:02:23.737428 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.737432 | orchestrator |     }
2026-03-02 00:02:23.737436 | orchestrator |
2026-03-02 00:02:23.737444 | orchestrator |   # local_file.inventory will be created
2026-03-02 00:02:23.737448 | orchestrator |   + resource "local_file" "inventory" {
2026-03-02 00:02:23.737452 | orchestrator |       + content = (known after apply)
2026-03-02 00:02:23.737456 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-02 00:02:23.737460 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-02 00:02:23.737464 | orchestrator |       + content_md5 = (known after apply)
2026-03-02 00:02:23.737468 | orchestrator |       + content_sha1 = (known after apply)
2026-03-02 00:02:23.737472 | orchestrator |       + content_sha256 = (known after apply)
2026-03-02 00:02:23.737476 | orchestrator |       + content_sha512 = (known after apply)
2026-03-02 00:02:23.737479 | orchestrator |       + directory_permission = "0777"
2026-03-02 00:02:23.737483 | orchestrator |       + file_permission = "0644"
2026-03-02 00:02:23.737487 | orchestrator |       + filename = "inventory.ci"
2026-03-02 00:02:23.737491 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.737495 | orchestrator |     }
2026-03-02 00:02:23.737499 | orchestrator |
2026-03-02 00:02:23.737503 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-03-02 00:02:23.737506 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-03-02 00:02:23.737510 | orchestrator |       + content = (sensitive value)
2026-03-02 00:02:23.737514 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-02 00:02:23.737518 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-02 00:02:23.737522 | orchestrator |       + content_md5 = (known after apply)
2026-03-02 00:02:23.737526 | orchestrator |       + content_sha1 = (known after apply)
2026-03-02 00:02:23.737530 | orchestrator |       + content_sha256 = (known after apply)
2026-03-02 00:02:23.737541 | orchestrator |       + content_sha512 = (known after apply)
2026-03-02 00:02:23.737545 | orchestrator |       + directory_permission = "0700"
2026-03-02 00:02:23.737549 | orchestrator |       + file_permission = "0600"
2026-03-02 00:02:23.737552 | orchestrator |       + filename = ".id_rsa.ci"
2026-03-02 00:02:23.737556 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.737560 | orchestrator |     }
2026-03-02 00:02:23.737564 | orchestrator |
2026-03-02 00:02:23.737568 | orchestrator |   # null_resource.node_semaphore will be created
2026-03-02 00:02:23.737572 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-03-02 00:02:23.737575 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.737579 | orchestrator |     }
2026-03-02 00:02:23.737583 | orchestrator |
2026-03-02 00:02:23.737587 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-02 00:02:23.737591 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-02 00:02:23.737595 | orchestrator |       + attachment = (known after apply)
2026-03-02 00:02:23.737599 | orchestrator |       + availability_zone = "nova"
2026-03-02 00:02:23.737602 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.737606 | orchestrator |       + image_id = (known after apply)
2026-03-02 00:02:23.737610 | orchestrator |       + metadata = (known after apply)
2026-03-02 00:02:23.737614 | orchestrator |       + name = "testbed-volume-manager-base"
2026-03-02 00:02:23.737618 | orchestrator |       + region = (known after apply)
2026-03-02 00:02:23.737622 | orchestrator |       + size = 80
2026-03-02 00:02:23.737626 | orchestrator |       + volume_retype_policy = "never"
2026-03-02 00:02:23.737630 | orchestrator |       + volume_type = "ssd"
2026-03-02 00:02:23.737633 | orchestrator |     }
2026-03-02 00:02:23.737637 | orchestrator |
2026-03-02 00:02:23.737641 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-02 00:02:23.737645 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-02 00:02:23.737649 | orchestrator |       + attachment = (known after apply)
2026-03-02 00:02:23.737653 | orchestrator |       + availability_zone = "nova"
2026-03-02 00:02:23.737657 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.737664 | orchestrator |       + image_id = (known after apply)
2026-03-02 00:02:23.737668 | orchestrator |       + metadata = (known after apply)
2026-03-02 00:02:23.737672 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-03-02 00:02:23.737676 | orchestrator |       + region = (known after apply)
2026-03-02 00:02:23.737680 | orchestrator |       + size = 80
2026-03-02 00:02:23.737684 | orchestrator |       + volume_retype_policy = "never"
2026-03-02 00:02:23.737688 | orchestrator |       + volume_type = "ssd"
2026-03-02 00:02:23.737692 | orchestrator |     }
2026-03-02 00:02:23.737695 | orchestrator |
2026-03-02 00:02:23.737699 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-02 00:02:23.737703 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-02 00:02:23.737707 | orchestrator |       + attachment = (known after apply)
2026-03-02 00:02:23.737711 | orchestrator |       + availability_zone = "nova"
2026-03-02 00:02:23.737715 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.737718 | orchestrator |       + image_id = (known after apply)
2026-03-02 00:02:23.737722 | orchestrator |       + metadata = (known after apply)
2026-03-02 00:02:23.737726 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-03-02 00:02:23.737730 | orchestrator |       + region = (known after apply)
2026-03-02 00:02:23.737734 | orchestrator |       + size = 80
2026-03-02 00:02:23.737738 | orchestrator |       + volume_retype_policy = "never"
2026-03-02 00:02:23.737742 | orchestrator |       + volume_type = "ssd"
2026-03-02 00:02:23.737745 | orchestrator |     }
2026-03-02 00:02:23.737749 | orchestrator |
2026-03-02 00:02:23.737753 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-02 00:02:23.737757 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-02 00:02:23.737761 | orchestrator |       + attachment = (known after apply)
2026-03-02 00:02:23.737765 | orchestrator |       + availability_zone = "nova"
2026-03-02 00:02:23.737768 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.737772 | orchestrator |       + image_id = (known after apply)
2026-03-02 00:02:23.737776 | orchestrator |       + metadata = (known after apply)
2026-03-02 00:02:23.737780 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-03-02 00:02:23.737784 | orchestrator |       + region = (known after apply)
2026-03-02 00:02:23.737788 | orchestrator |       + size = 80
2026-03-02 00:02:23.737792 | orchestrator |       + volume_retype_policy = "never"
2026-03-02 00:02:23.737795 | orchestrator |       + volume_type = "ssd"
2026-03-02 00:02:23.737799 | orchestrator |     }
2026-03-02 00:02:23.737803 | orchestrator |
2026-03-02 00:02:23.737807 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-02 00:02:23.737811 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-02 00:02:23.737815 | orchestrator |       + attachment = (known after apply)
2026-03-02 00:02:23.737819 | orchestrator |       + availability_zone = "nova"
2026-03-02 00:02:23.737822 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.737826 | orchestrator |       + image_id = (known after apply)
2026-03-02 00:02:23.737830 | orchestrator |       + metadata = (known after apply)
2026-03-02 00:02:23.737836 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-03-02 00:02:23.737840 | orchestrator |       + region = (known after apply)
2026-03-02 00:02:23.737844 | orchestrator |       + size = 80
2026-03-02 00:02:23.737848 | orchestrator |       + volume_retype_policy = "never"
2026-03-02 00:02:23.737852 | orchestrator |       + volume_type = "ssd"
2026-03-02 00:02:23.737856 | orchestrator |     }
2026-03-02 00:02:23.737860 | orchestrator |
2026-03-02 00:02:23.737864 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-02 00:02:23.737867 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-02 00:02:23.737871 | orchestrator |       + attachment = (known after apply)
2026-03-02 00:02:23.737875 | orchestrator |       + availability_zone = "nova"
2026-03-02 00:02:23.737879 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.737886 | orchestrator |       + image_id = (known after apply)
2026-03-02 00:02:23.737890 | orchestrator |       + metadata = (known after apply)
2026-03-02 00:02:23.737894 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-03-02 00:02:23.737898 | orchestrator |       + region = (known after apply)
2026-03-02 00:02:23.737902 | orchestrator |       + size = 80
2026-03-02 00:02:23.737905 | orchestrator |       + volume_retype_policy = "never"
2026-03-02 00:02:23.737909 | orchestrator |       + volume_type = "ssd"
2026-03-02 00:02:23.737913 | orchestrator |     }
2026-03-02 00:02:23.737917 | orchestrator |
2026-03-02 00:02:23.737921 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-02 00:02:23.737927 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-02 00:02:23.737932 | orchestrator |       + attachment = (known after apply)
2026-03-02 00:02:23.737935 | orchestrator |       + availability_zone = "nova"
2026-03-02 00:02:23.737939 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.737943 | orchestrator |       + image_id = (known after apply)
2026-03-02 00:02:23.737947 | orchestrator |       + metadata = (known after apply)
2026-03-02 00:02:23.737951 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-03-02 00:02:23.737955 | orchestrator |       + region = (known after apply)
2026-03-02 00:02:23.737958 | orchestrator |       + size = 80
2026-03-02 00:02:23.737962 | orchestrator |       + volume_retype_policy = "never"
2026-03-02 00:02:23.737966 | orchestrator |       + volume_type = "ssd"
2026-03-02 00:02:23.737970 | orchestrator |     }
2026-03-02 00:02:23.737974 | orchestrator |
2026-03-02 00:02:23.737978 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-02 00:02:23.737982 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-02 00:02:23.737985 | orchestrator |       + attachment = (known after apply)
2026-03-02 00:02:23.737989 | orchestrator |       + availability_zone = "nova"
2026-03-02 00:02:23.737993 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.737997 | orchestrator |       + metadata = (known after apply)
2026-03-02 00:02:23.738001 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-03-02 00:02:23.738005 | orchestrator |       + region = (known after apply)
2026-03-02 00:02:23.738009 | orchestrator |       + size = 20
2026-03-02 00:02:23.738026 | orchestrator |       + volume_retype_policy = "never"
2026-03-02 00:02:23.738031 | orchestrator |       + volume_type = "ssd"
2026-03-02 00:02:23.738035 | orchestrator |     }
2026-03-02 00:02:23.738039 | orchestrator |
2026-03-02 00:02:23.738043 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-02 00:02:23.738047 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-02 00:02:23.738050 | orchestrator |       + attachment = (known after apply)
2026-03-02 00:02:23.738054 | orchestrator |       + availability_zone = "nova"
2026-03-02 00:02:23.738058 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.738062 | orchestrator |       + metadata = (known after apply)
2026-03-02 00:02:23.738066 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-03-02 00:02:23.738069 | orchestrator |       + region = (known after apply)
2026-03-02 00:02:23.738073 | orchestrator |       + size = 20
2026-03-02 00:02:23.738077 | orchestrator |       + volume_retype_policy = "never"
2026-03-02 00:02:23.738081 | orchestrator |       + volume_type = "ssd"
2026-03-02 00:02:23.738085 | orchestrator |     }
2026-03-02 00:02:23.738088 | orchestrator |
2026-03-02 00:02:23.738092 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-02 00:02:23.738096 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-02 00:02:23.738100 | orchestrator |       + attachment = (known after apply)
2026-03-02 00:02:23.738104 | orchestrator |       + availability_zone = "nova"
2026-03-02 00:02:23.738108 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.738111 | orchestrator |       + metadata = (known after apply)
2026-03-02 00:02:23.738115 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-03-02 00:02:23.738119 | orchestrator |       + region = (known after apply)
2026-03-02 00:02:23.738126 | orchestrator |       + size = 20
2026-03-02 00:02:23.738130 | orchestrator |       + volume_retype_policy = "never"
2026-03-02 00:02:23.738134 | orchestrator |       + volume_type = "ssd"
2026-03-02 00:02:23.738138 | orchestrator |     }
2026-03-02 00:02:23.738141 | orchestrator |
2026-03-02 00:02:23.738145 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-02 00:02:23.738149 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-02 00:02:23.738153 | orchestrator |       + attachment = (known after apply)
2026-03-02 00:02:23.738157 | orchestrator |       + availability_zone = "nova"
2026-03-02 00:02:23.738161 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.738164 | orchestrator |       + metadata = (known after apply)
2026-03-02 00:02:23.738168 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-03-02 00:02:23.738172 | orchestrator |       + region = (known after apply)
2026-03-02 00:02:23.738176 | orchestrator |       + size = 20
2026-03-02 00:02:23.738180 | orchestrator |       + volume_retype_policy = "never"
2026-03-02 00:02:23.738183 | orchestrator |       + volume_type = "ssd"
2026-03-02 00:02:23.738187 | orchestrator |     }
2026-03-02 00:02:23.738191 | orchestrator |
2026-03-02 00:02:23.738195 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-02 00:02:23.738199 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-02 00:02:23.738202 | orchestrator |       + attachment = (known after apply)
2026-03-02 00:02:23.738206 | orchestrator |       + availability_zone = "nova"
2026-03-02 00:02:23.738210 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.738214 | orchestrator |       + metadata = (known after apply)
2026-03-02 00:02:23.738218 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-03-02 00:02:23.738221 | orchestrator |       + region = (known after apply)
2026-03-02 00:02:23.738228 | orchestrator |       + size = 20
2026-03-02 00:02:23.738232 | orchestrator |       + volume_retype_policy = "never"
2026-03-02 00:02:23.738235 | orchestrator |       + volume_type = "ssd"
2026-03-02 00:02:23.738239 | orchestrator |     }
2026-03-02 00:02:23.738243 | orchestrator |
2026-03-02 00:02:23.738247 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-02 00:02:23.738264 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-02 00:02:23.738268 | orchestrator |       + attachment = (known after apply)
2026-03-02 00:02:23.738272 | orchestrator |       + availability_zone = "nova"
2026-03-02 00:02:23.738276 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.738280 | orchestrator |       + metadata = (known after apply)
2026-03-02 00:02:23.738283 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-03-02 00:02:23.738287 | orchestrator |       + region = (known after apply)
2026-03-02 00:02:23.738291 | orchestrator |       + size = 20
2026-03-02 00:02:23.738295 | orchestrator |       + volume_retype_policy = "never"
2026-03-02 00:02:23.738299 | orchestrator |       + volume_type = "ssd"
2026-03-02 00:02:23.738302 | orchestrator |     }
2026-03-02 00:02:23.738306 | orchestrator |
2026-03-02 00:02:23.738310 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-02 00:02:23.738314 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-02 00:02:23.738318 | orchestrator |       + attachment = (known after apply)
2026-03-02 00:02:23.738321 | orchestrator |       + availability_zone = "nova"
2026-03-02 00:02:23.738325 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.738332 | orchestrator |       + metadata = (known after apply)
2026-03-02 00:02:23.738336 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-03-02 00:02:23.738340 | orchestrator |       + region = (known after apply)
2026-03-02 00:02:23.738343 | orchestrator |       + size = 20
2026-03-02 00:02:23.738347 | orchestrator |       + volume_retype_policy = "never"
2026-03-02 00:02:23.738351 | orchestrator |       + volume_type = "ssd"
2026-03-02 00:02:23.738355 | orchestrator |     }
2026-03-02 00:02:23.738359 | orchestrator |
2026-03-02 00:02:23.738363 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-02 00:02:23.738366 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-02 00:02:23.738374 | orchestrator |       + attachment = (known after apply)
2026-03-02 00:02:23.738378 | orchestrator |       + availability_zone = "nova"
2026-03-02 00:02:23.738381 | orchestrator |       + id = (known after apply)
2026-03-02 00:02:23.738385 | orchestrator |       + metadata = (known after apply)
2026-03-02 00:02:23.738389 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-03-02 00:02:23.738393 | orchestrator |       + region = (known after apply)
2026-03-02 00:02:23.738397 | orchestrator |       + size = 20
2026-03-02 00:02:23.738401 | orchestrator |       + volume_retype_policy = "never"
2026-03-02 00:02:23.738404 | orchestrator |       + volume_type = "ssd"
2026-03-02 00:02:23.738408 | orchestrator |     }
2026-03-02 00:02:23.738412 | orchestrator |
2026-03-02 00:02:23.738416 | orchestrator |   #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-02 00:02:23.738420 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-02 00:02:23.738424 | orchestrator | + attachment = (known after apply) 2026-03-02 00:02:23.738427 | orchestrator | + availability_zone = "nova" 2026-03-02 00:02:23.738431 | orchestrator | + id = (known after apply) 2026-03-02 00:02:23.738435 | orchestrator | + metadata = (known after apply) 2026-03-02 00:02:23.738439 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-02 00:02:23.738443 | orchestrator | + region = (known after apply) 2026-03-02 00:02:23.738446 | orchestrator | + size = 20 2026-03-02 00:02:23.738450 | orchestrator | + volume_retype_policy = "never" 2026-03-02 00:02:23.738454 | orchestrator | + volume_type = "ssd" 2026-03-02 00:02:23.738458 | orchestrator | } 2026-03-02 00:02:23.738462 | orchestrator | 2026-03-02 00:02:23.738465 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-02 00:02:23.738469 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-02 00:02:23.738473 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-02 00:02:23.738477 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-02 00:02:23.738481 | orchestrator | + all_metadata = (known after apply) 2026-03-02 00:02:23.738484 | orchestrator | + all_tags = (known after apply) 2026-03-02 00:02:23.738488 | orchestrator | + availability_zone = "nova" 2026-03-02 00:02:23.738492 | orchestrator | + config_drive = true 2026-03-02 00:02:23.738496 | orchestrator | + created = (known after apply) 2026-03-02 00:02:23.738500 | orchestrator | + flavor_id = (known after apply) 2026-03-02 00:02:23.738503 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-02 00:02:23.738507 | orchestrator | + force_delete = false 2026-03-02 00:02:23.738511 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-02 00:02:23.738515 | 
orchestrator | + id = (known after apply) 2026-03-02 00:02:23.738518 | orchestrator | + image_id = (known after apply) 2026-03-02 00:02:23.738522 | orchestrator | + image_name = (known after apply) 2026-03-02 00:02:23.738526 | orchestrator | + key_pair = "testbed" 2026-03-02 00:02:23.738530 | orchestrator | + name = "testbed-manager" 2026-03-02 00:02:23.738533 | orchestrator | + power_state = "active" 2026-03-02 00:02:23.738537 | orchestrator | + region = (known after apply) 2026-03-02 00:02:23.738541 | orchestrator | + security_groups = (known after apply) 2026-03-02 00:02:23.738545 | orchestrator | + stop_before_destroy = false 2026-03-02 00:02:23.738548 | orchestrator | + updated = (known after apply) 2026-03-02 00:02:23.738552 | orchestrator | + user_data = (sensitive value) 2026-03-02 00:02:23.738556 | orchestrator | 2026-03-02 00:02:23.738560 | orchestrator | + block_device { 2026-03-02 00:02:23.738564 | orchestrator | + boot_index = 0 2026-03-02 00:02:23.738567 | orchestrator | + delete_on_termination = false 2026-03-02 00:02:23.738573 | orchestrator | + destination_type = "volume" 2026-03-02 00:02:23.738577 | orchestrator | + multiattach = false 2026-03-02 00:02:23.738581 | orchestrator | + source_type = "volume" 2026-03-02 00:02:23.738585 | orchestrator | + uuid = (known after apply) 2026-03-02 00:02:23.738592 | orchestrator | } 2026-03-02 00:02:23.738596 | orchestrator | 2026-03-02 00:02:23.738600 | orchestrator | + network { 2026-03-02 00:02:23.738603 | orchestrator | + access_network = false 2026-03-02 00:02:23.738607 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-02 00:02:23.738611 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-02 00:02:23.738615 | orchestrator | + mac = (known after apply) 2026-03-02 00:02:23.738619 | orchestrator | + name = (known after apply) 2026-03-02 00:02:23.738622 | orchestrator | + port = (known after apply) 2026-03-02 00:02:23.738626 | orchestrator | + uuid = (known after apply) 2026-03-02 
00:02:23.738630 | orchestrator | } 2026-03-02 00:02:23.738634 | orchestrator | } 2026-03-02 00:02:23.738638 | orchestrator | 2026-03-02 00:02:23.738642 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-02 00:02:23.738645 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-02 00:02:23.738649 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-02 00:02:23.738653 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-02 00:02:23.738657 | orchestrator | + all_metadata = (known after apply) 2026-03-02 00:02:23.738661 | orchestrator | + all_tags = (known after apply) 2026-03-02 00:02:23.738664 | orchestrator | + availability_zone = "nova" 2026-03-02 00:02:23.738668 | orchestrator | + config_drive = true 2026-03-02 00:02:23.738672 | orchestrator | + created = (known after apply) 2026-03-02 00:02:23.738676 | orchestrator | + flavor_id = (known after apply) 2026-03-02 00:02:23.738680 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-02 00:02:23.738683 | orchestrator | + force_delete = false 2026-03-02 00:02:23.738687 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-02 00:02:23.738691 | orchestrator | + id = (known after apply) 2026-03-02 00:02:23.738695 | orchestrator | + image_id = (known after apply) 2026-03-02 00:02:23.738699 | orchestrator | + image_name = (known after apply) 2026-03-02 00:02:23.738703 | orchestrator | + key_pair = "testbed" 2026-03-02 00:02:23.738706 | orchestrator | + name = "testbed-node-0" 2026-03-02 00:02:23.738710 | orchestrator | + power_state = "active" 2026-03-02 00:02:23.738716 | orchestrator | + region = (known after apply) 2026-03-02 00:02:23.738720 | orchestrator | + security_groups = (known after apply) 2026-03-02 00:02:23.738724 | orchestrator | + stop_before_destroy = false 2026-03-02 00:02:23.738728 | orchestrator | + updated = (known after apply) 2026-03-02 00:02:23.738732 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-02 00:02:23.738736 | orchestrator | 2026-03-02 00:02:23.738739 | orchestrator | + block_device { 2026-03-02 00:02:23.738743 | orchestrator | + boot_index = 0 2026-03-02 00:02:23.738747 | orchestrator | + delete_on_termination = false 2026-03-02 00:02:23.738751 | orchestrator | + destination_type = "volume" 2026-03-02 00:02:23.738755 | orchestrator | + multiattach = false 2026-03-02 00:02:23.738758 | orchestrator | + source_type = "volume" 2026-03-02 00:02:23.738762 | orchestrator | + uuid = (known after apply) 2026-03-02 00:02:23.738766 | orchestrator | } 2026-03-02 00:02:23.738770 | orchestrator | 2026-03-02 00:02:23.738774 | orchestrator | + network { 2026-03-02 00:02:23.738778 | orchestrator | + access_network = false 2026-03-02 00:02:23.738781 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-02 00:02:23.738785 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-02 00:02:23.738789 | orchestrator | + mac = (known after apply) 2026-03-02 00:02:23.738793 | orchestrator | + name = (known after apply) 2026-03-02 00:02:23.738797 | orchestrator | + port = (known after apply) 2026-03-02 00:02:23.738800 | orchestrator | + uuid = (known after apply) 2026-03-02 00:02:23.738804 | orchestrator | } 2026-03-02 00:02:23.738808 | orchestrator | } 2026-03-02 00:02:23.738812 | orchestrator | 2026-03-02 00:02:23.738816 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-02 00:02:23.738820 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-02 00:02:23.738824 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-02 00:02:23.738833 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-02 00:02:23.738837 | orchestrator | + all_metadata = (known after apply) 2026-03-02 00:02:23.738841 | orchestrator | + all_tags = (known after apply) 2026-03-02 00:02:23.738845 | orchestrator | + availability_zone = "nova" 2026-03-02 00:02:23.738849 
| orchestrator | + config_drive = true 2026-03-02 00:02:23.738852 | orchestrator | + created = (known after apply) 2026-03-02 00:02:23.738856 | orchestrator | + flavor_id = (known after apply) 2026-03-02 00:02:23.738860 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-02 00:02:23.738864 | orchestrator | + force_delete = false 2026-03-02 00:02:23.738868 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-02 00:02:23.738871 | orchestrator | + id = (known after apply) 2026-03-02 00:02:23.738875 | orchestrator | + image_id = (known after apply) 2026-03-02 00:02:23.738879 | orchestrator | + image_name = (known after apply) 2026-03-02 00:02:23.738883 | orchestrator | + key_pair = "testbed" 2026-03-02 00:02:23.738887 | orchestrator | + name = "testbed-node-1" 2026-03-02 00:02:23.738891 | orchestrator | + power_state = "active" 2026-03-02 00:02:23.738894 | orchestrator | + region = (known after apply) 2026-03-02 00:02:23.738898 | orchestrator | + security_groups = (known after apply) 2026-03-02 00:02:23.738902 | orchestrator | + stop_before_destroy = false 2026-03-02 00:02:23.738906 | orchestrator | + updated = (known after apply) 2026-03-02 00:02:23.738910 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-02 00:02:23.738913 | orchestrator | 2026-03-02 00:02:23.738917 | orchestrator | + block_device { 2026-03-02 00:02:23.738921 | orchestrator | + boot_index = 0 2026-03-02 00:02:23.738925 | orchestrator | + delete_on_termination = false 2026-03-02 00:02:23.738929 | orchestrator | + destination_type = "volume" 2026-03-02 00:02:23.738932 | orchestrator | + multiattach = false 2026-03-02 00:02:23.738936 | orchestrator | + source_type = "volume" 2026-03-02 00:02:23.738940 | orchestrator | + uuid = (known after apply) 2026-03-02 00:02:23.738944 | orchestrator | } 2026-03-02 00:02:23.738948 | orchestrator | 2026-03-02 00:02:23.738951 | orchestrator | + network { 2026-03-02 00:02:23.738955 | orchestrator | + access_network = 
false 2026-03-02 00:02:23.738959 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-02 00:02:23.738963 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-02 00:02:23.738967 | orchestrator | + mac = (known after apply) 2026-03-02 00:02:23.738970 | orchestrator | + name = (known after apply) 2026-03-02 00:02:23.738974 | orchestrator | + port = (known after apply) 2026-03-02 00:02:23.738978 | orchestrator | + uuid = (known after apply) 2026-03-02 00:02:23.738982 | orchestrator | } 2026-03-02 00:02:23.738986 | orchestrator | } 2026-03-02 00:02:23.738989 | orchestrator | 2026-03-02 00:02:23.738993 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-02 00:02:23.738997 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-02 00:02:23.739001 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-02 00:02:23.739005 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-02 00:02:23.739009 | orchestrator | + all_metadata = (known after apply) 2026-03-02 00:02:23.739012 | orchestrator | + all_tags = (known after apply) 2026-03-02 00:02:23.739019 | orchestrator | + availability_zone = "nova" 2026-03-02 00:02:23.739023 | orchestrator | + config_drive = true 2026-03-02 00:02:23.739027 | orchestrator | + created = (known after apply) 2026-03-02 00:02:23.739031 | orchestrator | + flavor_id = (known after apply) 2026-03-02 00:02:23.739034 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-02 00:02:23.739038 | orchestrator | + force_delete = false 2026-03-02 00:02:23.739042 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-02 00:02:23.739046 | orchestrator | + id = (known after apply) 2026-03-02 00:02:23.739050 | orchestrator | + image_id = (known after apply) 2026-03-02 00:02:23.739056 | orchestrator | + image_name = (known after apply) 2026-03-02 00:02:23.739060 | orchestrator | + key_pair = "testbed" 2026-03-02 00:02:23.739064 | orchestrator | + name = 
"testbed-node-2" 2026-03-02 00:02:23.739068 | orchestrator | + power_state = "active" 2026-03-02 00:02:23.739072 | orchestrator | + region = (known after apply) 2026-03-02 00:02:23.739076 | orchestrator | + security_groups = (known after apply) 2026-03-02 00:02:23.739079 | orchestrator | + stop_before_destroy = false 2026-03-02 00:02:23.739083 | orchestrator | + updated = (known after apply) 2026-03-02 00:02:23.739087 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-02 00:02:23.739091 | orchestrator | 2026-03-02 00:02:23.739095 | orchestrator | + block_device { 2026-03-02 00:02:23.739099 | orchestrator | + boot_index = 0 2026-03-02 00:02:23.739102 | orchestrator | + delete_on_termination = false 2026-03-02 00:02:23.739106 | orchestrator | + destination_type = "volume" 2026-03-02 00:02:23.739112 | orchestrator | + multiattach = false 2026-03-02 00:02:23.739116 | orchestrator | + source_type = "volume" 2026-03-02 00:02:23.739120 | orchestrator | + uuid = (known after apply) 2026-03-02 00:02:23.739124 | orchestrator | } 2026-03-02 00:02:23.739128 | orchestrator | 2026-03-02 00:02:23.739131 | orchestrator | + network { 2026-03-02 00:02:23.739135 | orchestrator | + access_network = false 2026-03-02 00:02:23.739139 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-02 00:02:23.739143 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-02 00:02:23.739147 | orchestrator | + mac = (known after apply) 2026-03-02 00:02:23.739150 | orchestrator | + name = (known after apply) 2026-03-02 00:02:23.739154 | orchestrator | + port = (known after apply) 2026-03-02 00:02:23.739158 | orchestrator | + uuid = (known after apply) 2026-03-02 00:02:23.739162 | orchestrator | } 2026-03-02 00:02:23.739166 | orchestrator | } 2026-03-02 00:02:23.739169 | orchestrator | 2026-03-02 00:02:23.739173 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-02 00:02:23.739177 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-02 00:02:23.739181 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-02 00:02:23.739185 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-02 00:02:23.739189 | orchestrator | + all_metadata = (known after apply) 2026-03-02 00:02:23.739192 | orchestrator | + all_tags = (known after apply) 2026-03-02 00:02:23.739196 | orchestrator | + availability_zone = "nova" 2026-03-02 00:02:23.739200 | orchestrator | + config_drive = true 2026-03-02 00:02:23.739204 | orchestrator | + created = (known after apply) 2026-03-02 00:02:23.739208 | orchestrator | + flavor_id = (known after apply) 2026-03-02 00:02:23.739211 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-02 00:02:23.739215 | orchestrator | + force_delete = false 2026-03-02 00:02:23.739219 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-02 00:02:23.739223 | orchestrator | + id = (known after apply) 2026-03-02 00:02:23.739227 | orchestrator | + image_id = (known after apply) 2026-03-02 00:02:23.739230 | orchestrator | + image_name = (known after apply) 2026-03-02 00:02:23.739234 | orchestrator | + key_pair = "testbed" 2026-03-02 00:02:23.739238 | orchestrator | + name = "testbed-node-3" 2026-03-02 00:02:23.739242 | orchestrator | + power_state = "active" 2026-03-02 00:02:23.739246 | orchestrator | + region = (known after apply) 2026-03-02 00:02:23.739249 | orchestrator | + security_groups = (known after apply) 2026-03-02 00:02:23.739290 | orchestrator | + stop_before_destroy = false 2026-03-02 00:02:23.739296 | orchestrator | + updated = (known after apply) 2026-03-02 00:02:23.739302 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-02 00:02:23.739308 | orchestrator | 2026-03-02 00:02:23.739312 | orchestrator | + block_device { 2026-03-02 00:02:23.739318 | orchestrator | + boot_index = 0 2026-03-02 00:02:23.739322 | orchestrator | + delete_on_termination = false 2026-03-02 
00:02:23.739326 | orchestrator | + destination_type = "volume" 2026-03-02 00:02:23.739334 | orchestrator | + multiattach = false 2026-03-02 00:02:23.739338 | orchestrator | + source_type = "volume" 2026-03-02 00:02:23.739342 | orchestrator | + uuid = (known after apply) 2026-03-02 00:02:23.739346 | orchestrator | } 2026-03-02 00:02:23.739349 | orchestrator | 2026-03-02 00:02:23.739353 | orchestrator | + network { 2026-03-02 00:02:23.739357 | orchestrator | + access_network = false 2026-03-02 00:02:23.739361 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-02 00:02:23.739365 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-02 00:02:23.739368 | orchestrator | + mac = (known after apply) 2026-03-02 00:02:23.739372 | orchestrator | + name = (known after apply) 2026-03-02 00:02:23.739376 | orchestrator | + port = (known after apply) 2026-03-02 00:02:23.739380 | orchestrator | + uuid = (known after apply) 2026-03-02 00:02:23.739383 | orchestrator | } 2026-03-02 00:02:23.739387 | orchestrator | } 2026-03-02 00:02:23.739391 | orchestrator | 2026-03-02 00:02:23.739395 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-02 00:02:23.739399 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-02 00:02:23.739403 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-02 00:02:23.739406 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-02 00:02:23.739410 | orchestrator | + all_metadata = (known after apply) 2026-03-02 00:02:23.739414 | orchestrator | + all_tags = (known after apply) 2026-03-02 00:02:23.739418 | orchestrator | + availability_zone = "nova" 2026-03-02 00:02:23.739422 | orchestrator | + config_drive = true 2026-03-02 00:02:23.739425 | orchestrator | + created = (known after apply) 2026-03-02 00:02:23.739429 | orchestrator | + flavor_id = (known after apply) 2026-03-02 00:02:23.739433 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-02 00:02:23.739437 | 
orchestrator | + force_delete = false 2026-03-02 00:02:23.739441 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-02 00:02:23.739445 | orchestrator | + id = (known after apply) 2026-03-02 00:02:23.739448 | orchestrator | + image_id = (known after apply) 2026-03-02 00:02:23.739452 | orchestrator | + image_name = (known after apply) 2026-03-02 00:02:23.739456 | orchestrator | + key_pair = "testbed" 2026-03-02 00:02:23.739460 | orchestrator | + name = "testbed-node-4" 2026-03-02 00:02:23.739464 | orchestrator | + power_state = "active" 2026-03-02 00:02:23.739467 | orchestrator | + region = (known after apply) 2026-03-02 00:02:23.739471 | orchestrator | + security_groups = (known after apply) 2026-03-02 00:02:23.739475 | orchestrator | + stop_before_destroy = false 2026-03-02 00:02:23.739479 | orchestrator | + updated = (known after apply) 2026-03-02 00:02:23.739482 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-02 00:02:23.739486 | orchestrator | 2026-03-02 00:02:23.739490 | orchestrator | + block_device { 2026-03-02 00:02:23.739494 | orchestrator | + boot_index = 0 2026-03-02 00:02:23.739498 | orchestrator | + delete_on_termination = false 2026-03-02 00:02:23.739502 | orchestrator | + destination_type = "volume" 2026-03-02 00:02:23.739505 | orchestrator | + multiattach = false 2026-03-02 00:02:23.739509 | orchestrator | + source_type = "volume" 2026-03-02 00:02:23.739513 | orchestrator | + uuid = (known after apply) 2026-03-02 00:02:23.739517 | orchestrator | } 2026-03-02 00:02:23.739521 | orchestrator | 2026-03-02 00:02:23.739524 | orchestrator | + network { 2026-03-02 00:02:23.739528 | orchestrator | + access_network = false 2026-03-02 00:02:23.739532 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-02 00:02:23.739536 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-02 00:02:23.739540 | orchestrator | + mac = (known after apply) 2026-03-02 00:02:23.739544 | orchestrator | + name = (known 
after apply) 2026-03-02 00:02:23.739547 | orchestrator | + port = (known after apply) 2026-03-02 00:02:23.739554 | orchestrator | + uuid = (known after apply) 2026-03-02 00:02:23.739558 | orchestrator | } 2026-03-02 00:02:23.739561 | orchestrator | } 2026-03-02 00:02:23.739569 | orchestrator | 2026-03-02 00:02:23.739573 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-02 00:02:23.739577 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-02 00:02:23.739580 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-02 00:02:23.739584 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-02 00:02:23.739588 | orchestrator | + all_metadata = (known after apply) 2026-03-02 00:02:23.739592 | orchestrator | + all_tags = (known after apply) 2026-03-02 00:02:23.739596 | orchestrator | + availability_zone = "nova" 2026-03-02 00:02:23.739599 | orchestrator | + config_drive = true 2026-03-02 00:02:23.739603 | orchestrator | + created = (known after apply) 2026-03-02 00:02:23.739607 | orchestrator | + flavor_id = (known after apply) 2026-03-02 00:02:23.739611 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-02 00:02:23.739615 | orchestrator | + force_delete = false 2026-03-02 00:02:23.739621 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-02 00:02:23.739625 | orchestrator | + id = (known after apply) 2026-03-02 00:02:23.739629 | orchestrator | + image_id = (known after apply) 2026-03-02 00:02:23.739633 | orchestrator | + image_name = (known after apply) 2026-03-02 00:02:23.739639 | orchestrator | + key_pair = "testbed" 2026-03-02 00:02:23.739645 | orchestrator | + name = "testbed-node-5" 2026-03-02 00:02:23.739651 | orchestrator | + power_state = "active" 2026-03-02 00:02:23.739656 | orchestrator | + region = (known after apply) 2026-03-02 00:02:23.739662 | orchestrator | + security_groups = (known after apply) 2026-03-02 00:02:23.739668 | orchestrator | + 
stop_before_destroy = false 2026-03-02 00:02:23.739679 | orchestrator | + updated = (known after apply) 2026-03-02 00:02:23.739685 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-02 00:02:23.739692 | orchestrator | 2026-03-02 00:02:23.739696 | orchestrator | + block_device { 2026-03-02 00:02:23.739700 | orchestrator | + boot_index = 0 2026-03-02 00:02:23.739704 | orchestrator | + delete_on_termination = false 2026-03-02 00:02:23.739708 | orchestrator | + destination_type = "volume" 2026-03-02 00:02:23.739712 | orchestrator | + multiattach = false 2026-03-02 00:02:23.739715 | orchestrator | + source_type = "volume" 2026-03-02 00:02:23.739719 | orchestrator | + uuid = (known after apply) 2026-03-02 00:02:23.739723 | orchestrator | } 2026-03-02 00:02:23.739727 | orchestrator | 2026-03-02 00:02:23.739731 | orchestrator | + network { 2026-03-02 00:02:23.739734 | orchestrator | + access_network = false 2026-03-02 00:02:23.739738 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-02 00:02:23.739742 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-02 00:02:23.739746 | orchestrator | + mac = (known after apply) 2026-03-02 00:02:23.739750 | orchestrator | + name = (known after apply) 2026-03-02 00:02:23.739753 | orchestrator | + port = (known after apply) 2026-03-02 00:02:23.739757 | orchestrator | + uuid = (known after apply) 2026-03-02 00:02:23.739761 | orchestrator | } 2026-03-02 00:02:23.739765 | orchestrator | } 2026-03-02 00:02:23.739769 | orchestrator | 2026-03-02 00:02:23.739772 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-02 00:02:23.739776 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-02 00:02:23.739780 | orchestrator | + fingerprint = (known after apply) 2026-03-02 00:02:23.739784 | orchestrator | + id = (known after apply) 2026-03-02 00:02:23.739788 | orchestrator | + name = "testbed" 2026-03-02 00:02:23.739791 | orchestrator | + private_key = 
(sensitive value) 2026-03-02 00:02:23.739795 | orchestrator | + public_key = (known after apply) 2026-03-02 00:02:23.739799 | orchestrator | + region = (known after apply) 2026-03-02 00:02:23.739803 | orchestrator | + user_id = (known after apply) 2026-03-02 00:02:23.739807 | orchestrator | } 2026-03-02 00:02:23.739810 | orchestrator | 2026-03-02 00:02:23.739814 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-02 00:02:23.739818 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-02 00:02:23.739826 | orchestrator | + device = (known after apply) 2026-03-02 00:02:23.739830 | orchestrator | + id = (known after apply) 2026-03-02 00:02:23.739834 | orchestrator | + instance_id = (known after apply) 2026-03-02 00:02:23.739837 | orchestrator | + region = (known after apply) 2026-03-02 00:02:23.739841 | orchestrator | + volume_id = (known after apply) 2026-03-02 00:02:23.739845 | orchestrator | } 2026-03-02 00:02:23.739849 | orchestrator | 2026-03-02 00:02:23.739853 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-02 00:02:23.739857 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-02 00:02:23.739861 | orchestrator | + device = (known after apply) 2026-03-02 00:02:23.739864 | orchestrator | + id = (known after apply) 2026-03-02 00:02:23.739868 | orchestrator | + instance_id = (known after apply) 2026-03-02 00:02:23.739872 | orchestrator | + region = (known after apply) 2026-03-02 00:02:23.739876 | orchestrator | + volume_id = (known after apply) 2026-03-02 00:02:23.739880 | orchestrator | } 2026-03-02 00:02:23.739883 | orchestrator | 2026-03-02 00:02:23.739887 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-02 00:02:23.739891 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
2026-03-02 00:02:23.739895 | orchestrator | {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-03-02 00:02:23.743239 | orchestrator | + network_id = (known after apply)
2026-03-02 00:02:23.743243 | orchestrator | + no_gateway = false
2026-03-02 00:02:23.743247 | orchestrator | + region = (known after apply)
2026-03-02 00:02:23.743300 | orchestrator | + service_types = (known after apply)
2026-03-02 00:02:23.743321 | orchestrator | + tenant_id = (known after apply)
2026-03-02 00:02:23.743325 | orchestrator |
2026-03-02 00:02:23.743329 | orchestrator | + allocation_pool {
2026-03-02 00:02:23.743333 | orchestrator | + end = "192.168.31.250"
2026-03-02 00:02:23.743337 | orchestrator | + start = "192.168.31.200"
2026-03-02 00:02:23.743341 | orchestrator | }
2026-03-02 00:02:23.743345 | orchestrator | }
2026-03-02 00:02:23.743348 | orchestrator |
2026-03-02 00:02:23.743352 | orchestrator | # terraform_data.image will be created
2026-03-02 00:02:23.743356 | orchestrator | + resource "terraform_data" "image" {
2026-03-02 00:02:23.743360 | orchestrator | + id = (known after apply)
2026-03-02 00:02:23.743364 | orchestrator | + input = "Ubuntu 24.04"
2026-03-02 00:02:23.743367 | orchestrator | + output = (known after apply)
2026-03-02 00:02:23.743371 | orchestrator | }
2026-03-02 00:02:23.743375 | orchestrator |
2026-03-02 00:02:23.743379 | orchestrator | # terraform_data.image_node will be created
2026-03-02 00:02:23.743383 | orchestrator | + resource "terraform_data" "image_node" {
2026-03-02 00:02:23.743386 | orchestrator | + id = (known after apply)
2026-03-02 00:02:23.743390 | orchestrator | + input = "Ubuntu 24.04"
2026-03-02 00:02:23.743394 | orchestrator | + output = (known after apply)
2026-03-02 00:02:23.743398 | orchestrator | }
2026-03-02 00:02:23.743402 | orchestrator |
2026-03-02 00:02:23.743405 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-03-02 00:02:23.743409 | orchestrator |
2026-03-02 00:02:23.743413 | orchestrator | Changes to Outputs:
2026-03-02 00:02:23.743417 | orchestrator | + manager_address = (sensitive value)
2026-03-02 00:02:23.743421 | orchestrator | + private_key = (sensitive value)
2026-03-02 00:02:23.931890 | orchestrator | terraform_data.image: Creating...
2026-03-02 00:02:23.931968 | orchestrator | terraform_data.image_node: Creating...
2026-03-02 00:02:23.931981 | orchestrator | terraform_data.image: Creation complete after 0s [id=896cbb0b-4ec2-2636-20dd-f3bce1900d2b]
2026-03-02 00:02:23.931992 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=e91912ba-827d-65c1-61f3-748278650e46]
2026-03-02 00:02:23.952560 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-02 00:02:23.953099 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-02 00:02:23.967945 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-02 00:02:23.969192 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-02 00:02:23.969326 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-02 00:02:23.969491 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-02 00:02:23.971116 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-02 00:02:23.972133 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-02 00:02:23.975743 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-02 00:02:23.975882 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-02 00:02:24.426291 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-02 00:02:24.434359 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-02 00:02:24.438437 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-02 00:02:24.444504 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-02 00:02:24.524002 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-03-02 00:02:24.532497 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-02 00:02:25.001683 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=b0f3d231-4cc4-4156-b52b-9bb6a6ed6fd4]
2026-03-02 00:02:25.011060 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-02 00:02:27.673106 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=ac18fc5b-7614-46f9-bf3c-282e02a3d506]
2026-03-02 00:02:27.683120 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-02 00:02:27.730541 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=a8122f95-b81e-4023-b303-8950dd4c9351]
2026-03-02 00:02:28.494463 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=24fb8e5a-509d-4406-a727-cf15b40a450f]
2026-03-02 00:02:28.494559 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-02 00:02:28.494574 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-02 00:02:28.494585 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=34a77e0f-df07-4c87-b046-7d039bca2077]
2026-03-02 00:02:28.494595 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-02 00:02:28.494605 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=5b76853c-a11b-45e9-97a5-74de733f1116]
2026-03-02 00:02:28.494617 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=e7c30f24-07cf-4e73-8c7c-bba1057c8cb7]
2026-03-02 00:02:28.494627 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-02 00:02:28.494637 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-02 00:02:28.494647 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=7341868e-8f6a-460c-870a-5a0cce1fa311]
2026-03-02 00:02:28.494656 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-02 00:02:28.494666 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=b2184af5-6da0-496d-b48a-b0daa217c842]
2026-03-02 00:02:28.494676 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-02 00:02:28.494686 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=3458d56d-fe8a-4fae-86e7-5458fccbe7bb]
2026-03-02 00:02:28.494697 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-02 00:02:28.494706 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=aa8f0102-5212-4305-be50-f5d5421dd449]
2026-03-02 00:02:28.869016 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 1s [id=446e3f0469426ee299a41a671a31f921cf424396]
2026-03-02 00:02:28.870603 | orchestrator | local_file.id_rsa_pub: Creation complete after 1s [id=b6135b460b8e65da5338cb35e3dff96d1a88fe39]
2026-03-02 00:02:28.982493 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=a4d71d84-2b04-488e-a1f8-624b9032b8c2]
2026-03-02 00:02:28.991341 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-02 00:02:31.142219 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8]
2026-03-02 00:02:31.293358 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=71a7a740-2bf8-4a84-80fa-758afac521da]
2026-03-02 00:02:31.347470 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=61056674-f458-4118-8944-0d8cbda618ed]
2026-03-02 00:02:31.347942 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=143a36a5-297c-4e67-888b-5cce7baa02e1]
2026-03-02 00:02:31.353479 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=7e8f2e0c-b41f-4b83-9d6c-a0655d053bba]
2026-03-02 00:02:31.405571 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=efa3f915-a5b5-4a27-b49b-496543c700c8]
2026-03-02 00:02:32.926919 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=0a6dd127-3db7-404c-9951-b8fd38d14f0e]
2026-03-02 00:02:32.933359 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-02 00:02:32.935327 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-02 00:02:32.936264 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-02 00:02:33.369736 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=00edc010-3202-4946-a83a-f909c3c60374]
2026-03-02 00:02:33.387392 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=b5c16347-c7e8-4c51-9d5e-06d9eb1e8bba]
2026-03-02 00:02:33.387713 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-02 00:02:33.388872 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-02 00:02:33.391809 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-02 00:02:33.393068 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-02 00:02:33.393127 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-02 00:02:33.404410 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-02 00:02:33.408335 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-02 00:02:33.409125 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-02 00:02:33.411667 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-02 00:02:33.964008 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=957257f2-4ec9-4fbf-bde3-9ee69954f594]
2026-03-02 00:02:36.866349 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-02 00:02:36.866369 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=017e9abf-89a8-4d68-aa6c-9595cc5a57db]
2026-03-02 00:02:36.866373 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-02 00:02:36.866378 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=2355d369-0d73-4f64-93bd-a19b4c72dd38]
2026-03-02 00:02:36.866382 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-02 00:02:36.866386 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=610a982d-02ce-4d7b-87ab-5285a3ba24fa]
2026-03-02 00:02:36.866390 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-02 00:02:36.866394 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 2s [id=b228e303-68f6-4384-aa05-050808ef4f35]
2026-03-02 00:02:36.866398 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-02 00:02:36.866402 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=a4e9c373-e0e5-4b9c-aab0-0d9021721aac]
2026-03-02 00:02:36.866406 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-02 00:02:36.866410 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=38ef0aa6-6310-4374-bec9-0bee4daef440]
2026-03-02 00:02:36.866414 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-02 00:02:36.866418 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=a859ca13-327d-4f51-a66f-bb698e09f6d5]
2026-03-02 00:02:36.866426 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=de5a8022-f1ef-400e-9405-80f9cd3610a2]
2026-03-02 00:02:36.866430 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=13269f18-2430-404f-abc4-d2df975de784]
2026-03-02 00:02:36.866434 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=75be38e7-2005-4756-89f6-c5c9f61a7d46]
2026-03-02 00:02:36.866437 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=3c78ce12-f813-4199-bdd1-159c1a1b431f]
2026-03-02 00:02:36.866441 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=caa83a1d-4cca-43f8-afcf-11301d215118]
2026-03-02 00:02:36.866445 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=05b2a28b-50f9-4381-aed6-6f38437a84d3]
2026-03-02 00:02:36.866449 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 3s [id=fb84dd7b-9dc9-4e0c-85a3-25e35447d811]
2026-03-02 00:02:36.866453 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=24a482eb-9dec-4e7d-ae0f-47c869f9322c]
2026-03-02 00:02:36.866466 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=d4389c53-2196-491a-8ccf-0f6b02cd96ee]
2026-03-02 00:02:36.866470 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-02 00:02:36.866475 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-02 00:02:36.866480 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-02 00:02:36.866483 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-02 00:02:36.873499 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-02 00:02:36.877135 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-02 00:02:36.878664 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-02 00:02:39.255292 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=05be3a60-5952-4115-97c0-4639a15da4a0]
2026-03-02 00:02:39.267224 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-02 00:02:39.271817 | orchestrator | local_file.inventory: Creating...
2026-03-02 00:02:39.274833 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-02 00:02:39.283508 | orchestrator | local_file.inventory: Creation complete after 0s [id=38470c6df7559fc2a0ab8d25ada3f3cf80e21c64]
2026-03-02 00:02:39.286087 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=aae725957e58ad6ad85682e406686437d9f09ef6]
2026-03-02 00:02:40.792644 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=05be3a60-5952-4115-97c0-4639a15da4a0]
2026-03-02 00:02:46.856401 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-02 00:02:46.867676 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-02 00:02:46.867783 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-02 00:02:46.875958 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-02 00:02:46.880315 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-02 00:02:46.880382 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-02 00:02:56.865336 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-02 00:02:56.868531 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-02 00:02:56.868628 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-02 00:02:56.876949 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-02 00:02:56.881202 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-02 00:02:56.881295 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-02 00:03:06.874312 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-03-02 00:03:06.874423 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-02 00:03:06.874436 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-02 00:03:06.877857 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-03-02 00:03:06.882136 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-02 00:03:06.882209 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-02 00:03:16.883531 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-03-02 00:03:16.883614 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-03-02 00:03:16.883623 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-03-02 00:03:16.883630 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-03-02 00:03:16.883644 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-03-02 00:03:16.883651 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-03-02 00:03:17.977034 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=e30c479a-9662-42de-947b-98a1f78ec7e0]
2026-03-02 00:03:26.886317 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-03-02 00:03:26.886421 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-03-02 00:03:26.886433 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed]
2026-03-02 00:03:26.886439 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed]
2026-03-02 00:03:26.886445 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-03-02 00:03:28.041849 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 51s [id=ca14492f-f050-4e18-9805-2f031986b3ae]
2026-03-02 00:03:28.046315 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 51s [id=87bacb12-a5b1-4ba8-a892-ce53f6841262]
2026-03-02 00:03:36.894643 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [1m0s elapsed]
2026-03-02 00:03:36.894726 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m0s elapsed]
2026-03-02 00:03:36.894818 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m0s elapsed]
2026-03-02 00:03:37.641479 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 1m1s [id=e746eb53-e53c-4e03-ad01-8abee5f945a0]
2026-03-02 00:03:38.548031 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 1m1s [id=af9c9b52-c045-423c-9af1-fac24211ce1c]
2026-03-02 00:03:38.548116 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 1m1s [id=14fc073a-fbd8-4330-b13d-2a736aa5dce4]
2026-03-02 00:03:38.832563 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-02 00:03:38.834698 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-02 00:03:38.837230 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-02 00:03:38.837463 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=4636274336137504694]
2026-03-02 00:03:38.837785 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-02 00:03:38.838532 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-02 00:03:38.839516 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-02 00:03:38.845153 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-02 00:03:38.859856 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-02 00:03:38.861499 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-02 00:03:38.878038 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-02 00:03:38.879300 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-02 00:03:42.227937 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=af9c9b52-c045-423c-9af1-fac24211ce1c/b2184af5-6da0-496d-b48a-b0daa217c842]
2026-03-02 00:03:42.258287 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=ca14492f-f050-4e18-9805-2f031986b3ae/3458d56d-fe8a-4fae-86e7-5458fccbe7bb]
2026-03-02 00:03:42.265063 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=e30c479a-9662-42de-947b-98a1f78ec7e0/24fb8e5a-509d-4406-a727-cf15b40a450f]
2026-03-02 00:03:42.306005 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=ca14492f-f050-4e18-9805-2f031986b3ae/ac18fc5b-7614-46f9-bf3c-282e02a3d506]
2026-03-02 00:03:42.340671 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=e30c479a-9662-42de-947b-98a1f78ec7e0/7341868e-8f6a-460c-870a-5a0cce1fa311]
2026-03-02 00:03:48.354640 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 9s [id=af9c9b52-c045-423c-9af1-fac24211ce1c/34a77e0f-df07-4c87-b046-7d039bca2077]
2026-03-02 00:03:48.411485 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 9s [id=af9c9b52-c045-423c-9af1-fac24211ce1c/5b76853c-a11b-45e9-97a5-74de733f1116]
2026-03-02 00:03:48.429466 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=ca14492f-f050-4e18-9805-2f031986b3ae/a8122f95-b81e-4023-b303-8950dd4c9351]
2026-03-02 00:03:48.459180 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 9s [id=e30c479a-9662-42de-947b-98a1f78ec7e0/e7c30f24-07cf-4e73-8c7c-bba1057c8cb7]
2026-03-02 00:03:48.882689 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-02 00:03:58.883413 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-02 00:03:59.226384 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=590d1ad5-3422-4c1c-8f5e-209eaa564ceb]
2026-03-02 00:03:59.242047 | orchestrator |
2026-03-02 00:03:59.242117 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-02 00:03:59.242127 | orchestrator |
2026-03-02 00:03:59.242134 | orchestrator | Outputs:
2026-03-02 00:03:59.242141 | orchestrator |
2026-03-02 00:03:59.242147 | orchestrator | manager_address =
2026-03-02 00:03:59.242154 | orchestrator | private_key =
2026-03-02 00:03:59.572850 | orchestrator | ok: Runtime: 0:01:41.097816
2026-03-02 00:03:59.605442 |
2026-03-02 00:03:59.605631 | TASK [Create infrastructure (stable)]
2026-03-02 00:04:00.139064 | orchestrator | skipping: Conditional result was False
2026-03-02 00:04:00.158564 |
2026-03-02 00:04:00.158749 | TASK [Fetch manager address]
2026-03-02 00:04:00.673203 | orchestrator | ok
2026-03-02 00:04:00.682504 |
2026-03-02 00:04:00.682635 | TASK [Set manager_host address]
2026-03-02 00:04:00.753271 | orchestrator | ok
2026-03-02 00:04:00.762745 |
2026-03-02 00:04:00.762908 | LOOP [Update ansible collections]
2026-03-02 00:04:01.896696 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-02 00:04:01.897082 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-02 00:04:01.897149 | orchestrator | Starting galaxy collection install process
2026-03-02 00:04:01.897193 | orchestrator | Process install dependency map
2026-03-02 00:04:01.897285 | orchestrator | Starting collection install process
2026-03-02 00:04:01.897331 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-03-02 00:04:01.897375 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-03-02 00:04:01.897431 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-02 00:04:01.897514 | orchestrator | ok: Item: commons Runtime: 0:00:00.774632
2026-03-02 00:04:02.991606 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-02 00:04:02.991773 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-02 00:04:02.991823 | orchestrator | Starting galaxy collection install process
2026-03-02 00:04:02.991860 | orchestrator | Process install dependency map
2026-03-02 00:04:02.991895 | orchestrator | Starting collection install process
2026-03-02 00:04:02.991929 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-03-02 00:04:02.991960 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-03-02 00:04:02.991991 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-02 00:04:02.992041 | orchestrator | ok: Item: services Runtime: 0:00:00.799337
2026-03-02 00:04:03.010553 |
2026-03-02 00:04:03.010694 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-02 00:04:13.588872 | orchestrator | ok
2026-03-02 00:04:13.600428 |
2026-03-02 00:04:13.600559 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-02 00:05:13.642754 | orchestrator | ok
2026-03-02 00:05:13.653056 |
2026-03-02 00:05:13.653178 | TASK [Fetch manager ssh hostkey]
2026-03-02 00:05:15.235384 | orchestrator | Output suppressed because no_log was given
2026-03-02 00:05:15.250556 |
2026-03-02 00:05:15.250743 | TASK [Get ssh keypair from terraform environment]
2026-03-02 00:05:15.788936 | orchestrator | ok: Runtime: 0:00:00.008068
2026-03-02 00:05:15.806440 |
2026-03-02 00:05:15.806610 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-02 00:05:15.855473 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-02 00:05:15.868883 |
2026-03-02 00:05:15.869021 | TASK [Run manager part 0]
2026-03-02 00:05:16.979491 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-02 00:05:17.056416 | orchestrator |
2026-03-02 00:05:17.056491 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-02 00:05:17.056502 | orchestrator |
2026-03-02 00:05:17.056520 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-02 00:05:18.915415 | orchestrator | ok: [testbed-manager]
2026-03-02 00:05:18.915496 | orchestrator |
2026-03-02 00:05:18.915531 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-02 00:05:18.915545 | orchestrator |
2026-03-02 00:05:18.915559 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-02 00:05:20.843075 | orchestrator | ok: [testbed-manager]
2026-03-02 00:05:20.843174 | orchestrator |
2026-03-02 00:05:20.843188 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-02 00:05:21.467928 | orchestrator | ok: [testbed-manager]
2026-03-02 00:05:21.467985 | orchestrator |
2026-03-02 00:05:21.467993 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-02 00:05:21.505358 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:05:21.505400 | orchestrator |
2026-03-02 00:05:21.505409 | orchestrator | TASK [Update package cache] ****************************************************
2026-03-02 00:05:21.532681 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:05:21.532734 | orchestrator |
2026-03-02 00:05:21.532742 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-02 00:05:21.557215 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:05:21.557267 | orchestrator |
2026-03-02 00:05:21.557276 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-02 00:05:21.584277 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:05:21.584325 | orchestrator |
2026-03-02 00:05:21.584334 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-03-02 00:05:21.612376 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:05:21.612421 | orchestrator |
2026-03-02 00:05:21.612432 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-03-02 00:05:21.648500 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:05:21.648538 | orchestrator |
2026-03-02 00:05:21.648545 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-03-02 00:05:21.677035 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:05:21.677074 | orchestrator |
2026-03-02 00:05:21.677081 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-03-02 00:05:22.376190 | orchestrator | changed: [testbed-manager]
2026-03-02 00:05:22.376233 | orchestrator |
2026-03-02 00:05:22.376242 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-03-02 00:08:11.141597 | orchestrator | changed: [testbed-manager]
2026-03-02 00:08:11.141685 | orchestrator |
2026-03-02 00:08:11.141703 | orchestrator | TASK [Install HWE kernel package on Ubuntu]
************************************ 2026-03-02 00:09:51.179240 | orchestrator | changed: [testbed-manager] 2026-03-02 00:09:51.179372 | orchestrator | 2026-03-02 00:09:51.179379 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-02 00:10:17.449450 | orchestrator | changed: [testbed-manager] 2026-03-02 00:10:17.449523 | orchestrator | 2026-03-02 00:10:17.449542 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-02 00:10:27.010386 | orchestrator | changed: [testbed-manager] 2026-03-02 00:10:27.010464 | orchestrator | 2026-03-02 00:10:27.010475 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-02 00:10:27.065242 | orchestrator | ok: [testbed-manager] 2026-03-02 00:10:27.065334 | orchestrator | 2026-03-02 00:10:27.065352 | orchestrator | TASK [Get current user] ******************************************************** 2026-03-02 00:10:27.887016 | orchestrator | ok: [testbed-manager] 2026-03-02 00:10:27.887101 | orchestrator | 2026-03-02 00:10:27.887120 | orchestrator | TASK [Create venv directory] *************************************************** 2026-03-02 00:10:28.627598 | orchestrator | changed: [testbed-manager] 2026-03-02 00:10:28.627680 | orchestrator | 2026-03-02 00:10:28.627698 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-03-02 00:10:34.870709 | orchestrator | changed: [testbed-manager] 2026-03-02 00:10:34.870950 | orchestrator | 2026-03-02 00:10:34.871009 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-03-02 00:10:40.833006 | orchestrator | changed: [testbed-manager] 2026-03-02 00:10:40.833096 | orchestrator | 2026-03-02 00:10:40.833114 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-02 00:10:43.428195 | orchestrator | changed: 
[testbed-manager] 2026-03-02 00:10:43.428283 | orchestrator | 2026-03-02 00:10:43.428299 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-02 00:10:45.106322 | orchestrator | changed: [testbed-manager] 2026-03-02 00:10:45.106690 | orchestrator | 2026-03-02 00:10:45.106700 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-02 00:10:46.147163 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-02 00:10:46.147228 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-02 00:10:46.147237 | orchestrator | 2026-03-02 00:10:46.147244 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-02 00:10:46.183257 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-02 00:10:46.183300 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-02 00:10:46.183306 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-02 00:10:46.183311 | orchestrator | deprecation_warnings=False in ansible.cfg. 
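The earlier task "Wait up to 300 seconds for port 22 to become open and contain \"OpenSSH\"" polls until the manager's SSH banner appears. A minimal sketch of that polling logic (a hypothetical helper for illustration, not the actual Ansible `wait_for` module implementation):

```python
import socket
import time

def wait_for_banner(host, port, token, timeout=300.0):
    """Poll until `port` accepts connections and its greeting banner
    contains `token` (e.g. "OpenSSH"), or the deadline expires.
    Sketch only; the real wait_for module is more featureful."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                sock.settimeout(5)
                banner = sock.recv(256).decode(errors="replace")
                if token in banner:
                    return True
        except OSError:
            pass  # port not open yet; retry after a short pause
        time.sleep(1)
    return False
```

The check matters because the instance may accept TCP connections before sshd is actually serving, which is why the task matches on the banner rather than the open port alone.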
2026-03-02 00:10:49.381602 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-02 00:10:49.381691 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-02 00:10:49.381705 | orchestrator | 2026-03-02 00:10:49.381718 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-02 00:10:49.944554 | orchestrator | changed: [testbed-manager] 2026-03-02 00:10:49.944594 | orchestrator | 2026-03-02 00:10:49.944602 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-02 00:13:11.196101 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-02 00:13:11.196450 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-02 00:13:11.196474 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-02 00:13:11.196486 | orchestrator | 2026-03-02 00:13:11.196497 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-02 00:13:13.561524 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-02 00:13:13.561562 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-02 00:13:13.561568 | orchestrator | 2026-03-02 00:13:13.561572 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-02 00:13:13.561577 | orchestrator | 2026-03-02 00:13:13.561582 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-02 00:13:14.997323 | orchestrator | ok: [testbed-manager] 2026-03-02 00:13:14.997423 | orchestrator | 2026-03-02 00:13:14.997441 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-02 00:13:15.050680 | orchestrator | ok: [testbed-manager] 2026-03-02 00:13:15.050743 | 
orchestrator | 2026-03-02 00:13:15.050753 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-02 00:13:15.118656 | orchestrator | ok: [testbed-manager] 2026-03-02 00:13:15.118744 | orchestrator | 2026-03-02 00:13:15.118764 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-02 00:13:15.959475 | orchestrator | changed: [testbed-manager] 2026-03-02 00:13:15.959516 | orchestrator | 2026-03-02 00:13:15.959523 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-02 00:13:16.690363 | orchestrator | changed: [testbed-manager] 2026-03-02 00:13:16.690457 | orchestrator | 2026-03-02 00:13:16.690474 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-02 00:13:18.079356 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-02 00:13:18.079453 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-02 00:13:18.079466 | orchestrator | 2026-03-02 00:13:18.079500 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-02 00:13:19.524507 | orchestrator | changed: [testbed-manager] 2026-03-02 00:13:19.524653 | orchestrator | 2026-03-02 00:13:19.524674 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-02 00:13:21.316138 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-02 00:13:21.316233 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-02 00:13:21.316248 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-02 00:13:21.316260 | orchestrator | 2026-03-02 00:13:21.316274 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-02 00:13:21.371987 | orchestrator | skipping: 
[testbed-manager] 2026-03-02 00:13:21.372051 | orchestrator | 2026-03-02 00:13:21.372061 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-02 00:13:21.439114 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:13:21.439714 | orchestrator | 2026-03-02 00:13:21.439734 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-02 00:13:22.022252 | orchestrator | changed: [testbed-manager] 2026-03-02 00:13:22.022321 | orchestrator | 2026-03-02 00:13:22.022329 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-02 00:13:22.101238 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:13:22.101299 | orchestrator | 2026-03-02 00:13:22.101309 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-02 00:13:22.976065 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-02 00:13:22.976418 | orchestrator | changed: [testbed-manager] 2026-03-02 00:13:22.976461 | orchestrator | 2026-03-02 00:13:22.976474 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-02 00:13:23.010920 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:13:23.010996 | orchestrator | 2026-03-02 00:13:23.011010 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-02 00:13:23.041807 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:13:23.041886 | orchestrator | 2026-03-02 00:13:23.041902 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-02 00:13:23.073140 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:13:23.073224 | orchestrator | 2026-03-02 00:13:23.073247 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-02 00:13:23.150700 | 
orchestrator | skipping: [testbed-manager] 2026-03-02 00:13:23.150909 | orchestrator | 2026-03-02 00:13:23.150932 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-02 00:13:23.861638 | orchestrator | ok: [testbed-manager] 2026-03-02 00:13:23.861687 | orchestrator | 2026-03-02 00:13:23.861693 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-02 00:13:23.861698 | orchestrator | 2026-03-02 00:13:23.861702 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-02 00:13:25.271904 | orchestrator | ok: [testbed-manager] 2026-03-02 00:13:25.271961 | orchestrator | 2026-03-02 00:13:25.271968 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-02 00:13:26.170830 | orchestrator | changed: [testbed-manager] 2026-03-02 00:13:26.170869 | orchestrator | 2026-03-02 00:13:26.170876 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:13:26.170882 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-02 00:13:26.170887 | orchestrator | 2026-03-02 00:13:26.697836 | orchestrator | ok: Runtime: 0:08:10.033975 2026-03-02 00:13:26.716463 | 2026-03-02 00:13:26.716597 | TASK [Point out that logging in on the manager is now possible] 2026-03-02 00:13:26.750244 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-03-02 00:13:26.758442 | 2026-03-02 00:13:26.758545 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-02 00:13:26.791448 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output here. It takes a few minutes for this task to complete. 
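The PLAY RECAP line above compresses the per-host result counters into `key=value` pairs on a single line. A small hypothetical parser (not part of the testbed tooling) that turns such a line into a host name and a counter dict:

```python
import re

def parse_recap(line):
    """Parse an Ansible PLAY RECAP host line such as
    'testbed-manager : ok=33 changed=23 unreachable=0 failed=0 ...'
    into (host, {counter: value}). Illustrative helper only."""
    host, _, counters = line.partition(":")
    stats = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", counters)}
    return host.strip(), stats
```

A recap with `failed=0` and `unreachable=0`, as seen here, is what lets the surrounding job treat the play as successful.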
2026-03-02 00:13:26.798572 | 2026-03-02 00:13:26.798672 | TASK [Run manager part 1 + 2] 2026-03-02 00:13:27.637861 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-02 00:13:27.693267 | orchestrator | 2026-03-02 00:13:27.693330 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-02 00:13:27.693339 | orchestrator | 2026-03-02 00:13:27.693353 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-02 00:13:30.506568 | orchestrator | ok: [testbed-manager] 2026-03-02 00:13:30.506637 | orchestrator | 2026-03-02 00:13:30.506677 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-02 00:13:30.541567 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:13:30.541628 | orchestrator | 2026-03-02 00:13:30.541640 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-02 00:13:30.577556 | orchestrator | ok: [testbed-manager] 2026-03-02 00:13:30.577610 | orchestrator | 2026-03-02 00:13:30.577617 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-02 00:13:30.621167 | orchestrator | ok: [testbed-manager] 2026-03-02 00:13:30.621223 | orchestrator | 2026-03-02 00:13:30.621234 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-02 00:13:30.686119 | orchestrator | ok: [testbed-manager] 2026-03-02 00:13:30.686173 | orchestrator | 2026-03-02 00:13:30.686181 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-02 00:13:30.743951 | orchestrator | ok: [testbed-manager] 2026-03-02 00:13:30.744009 | orchestrator | 2026-03-02 00:13:30.744018 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-02 00:13:30.794783 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-02 00:13:30.794828 | orchestrator | 2026-03-02 00:13:30.794835 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-02 00:13:31.446091 | orchestrator | ok: [testbed-manager] 2026-03-02 00:13:31.446152 | orchestrator | 2026-03-02 00:13:31.446163 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-02 00:13:31.482329 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:13:31.482561 | orchestrator | 2026-03-02 00:13:31.483635 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-02 00:13:32.794195 | orchestrator | changed: [testbed-manager] 2026-03-02 00:13:32.794243 | orchestrator | 2026-03-02 00:13:32.794252 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-02 00:13:33.352205 | orchestrator | ok: [testbed-manager] 2026-03-02 00:13:33.352251 | orchestrator | 2026-03-02 00:13:33.352258 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-02 00:13:34.407701 | orchestrator | changed: [testbed-manager] 2026-03-02 00:13:34.407747 | orchestrator | 2026-03-02 00:13:34.407754 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-02 00:13:49.481381 | orchestrator | changed: [testbed-manager] 2026-03-02 00:13:49.481504 | orchestrator | 2026-03-02 00:13:49.481633 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-02 00:13:50.142935 | orchestrator | ok: [testbed-manager] 2026-03-02 00:13:50.143027 | orchestrator | 2026-03-02 00:13:50.143046 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-02 00:13:50.228164 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:13:50.228279 | orchestrator | 2026-03-02 00:13:50.228308 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-02 00:13:51.143056 | orchestrator | changed: [testbed-manager] 2026-03-02 00:13:51.143130 | orchestrator | 2026-03-02 00:13:51.143146 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-02 00:13:52.078119 | orchestrator | changed: [testbed-manager] 2026-03-02 00:13:52.078160 | orchestrator | 2026-03-02 00:13:52.078168 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-02 00:13:52.644455 | orchestrator | changed: [testbed-manager] 2026-03-02 00:13:52.644494 | orchestrator | 2026-03-02 00:13:52.644501 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-02 00:13:52.685575 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-02 00:13:52.685677 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-02 00:13:52.685693 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-02 00:13:52.685706 | orchestrator | deprecation_warnings=False in ansible.cfg. 
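The repository role above removes the legacy `sources.list` and copies a `ubuntu.sources` file, i.e. it switches the manager to the deb822 source format used by Ubuntu 24.04. A minimal sketch of what such a file looks like; the URIs and suites here are generic placeholders, the actual template shipped by osism.commons.repository may point at different mirrors:

```
Types: deb
URIs: http://archive.ubuntu.com/ubuntu/
Suites: noble noble-updates noble-backports
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
```

One deb822 stanza replaces several one-line `deb` entries, which is why a single copied file plus a cache update is enough here.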
2026-03-02 00:13:54.831097 | orchestrator | changed: [testbed-manager] 2026-03-02 00:13:54.831878 | orchestrator | 2026-03-02 00:13:54.831913 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-02 00:14:03.490258 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-02 00:14:03.490517 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-02 00:14:03.490546 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-02 00:14:03.490559 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-02 00:14:03.490578 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-02 00:14:03.490589 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-02 00:14:03.490601 | orchestrator | 2026-03-02 00:14:03.490613 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-02 00:14:04.574205 | orchestrator | changed: [testbed-manager] 2026-03-02 00:14:04.574513 | orchestrator | 2026-03-02 00:14:04.574534 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-02 00:14:04.617087 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:14:04.617186 | orchestrator | 2026-03-02 00:14:04.617205 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-02 00:14:07.579781 | orchestrator | changed: [testbed-manager] 2026-03-02 00:14:07.579875 | orchestrator | 2026-03-02 00:14:07.579893 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-02 00:14:07.624025 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:14:07.624105 | orchestrator | 2026-03-02 00:14:07.624121 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-02 00:15:38.619971 | orchestrator | changed: [testbed-manager] 2026-03-02 
00:15:38.620068 | orchestrator | 2026-03-02 00:15:38.620089 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-02 00:15:39.724072 | orchestrator | ok: [testbed-manager] 2026-03-02 00:15:39.724156 | orchestrator | 2026-03-02 00:15:39.724171 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:15:39.724185 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-02 00:15:39.724197 | orchestrator | 2026-03-02 00:15:39.913674 | orchestrator | ok: Runtime: 0:02:12.711887 2026-03-02 00:15:39.932826 | 2026-03-02 00:15:39.932996 | TASK [Reboot manager] 2026-03-02 00:15:41.467834 | orchestrator | ok: Runtime: 0:00:00.966811 2026-03-02 00:15:41.484159 | 2026-03-02 00:15:41.484311 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-02 00:15:55.347162 | orchestrator | ok 2026-03-02 00:15:55.359362 | 2026-03-02 00:15:55.359494 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-02 00:16:55.395150 | orchestrator | ok 2026-03-02 00:16:55.401914 | 2026-03-02 00:16:55.402019 | TASK [Deploy manager + bootstrap nodes] 2026-03-02 00:16:57.817153 | orchestrator | 2026-03-02 00:16:57.817408 | orchestrator | # DEPLOY MANAGER 2026-03-02 00:16:57.817436 | orchestrator | 2026-03-02 00:16:57.817451 | orchestrator | + set -e 2026-03-02 00:16:57.817465 | orchestrator | + echo 2026-03-02 00:16:57.817479 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-02 00:16:57.817497 | orchestrator | + echo 2026-03-02 00:16:57.817544 | orchestrator | + cat /opt/manager-vars.sh 2026-03-02 00:16:57.819898 | orchestrator | export NUMBER_OF_NODES=6 2026-03-02 00:16:57.819928 | orchestrator | 2026-03-02 00:16:57.819941 | orchestrator | export CEPH_VERSION=reef 2026-03-02 00:16:57.819955 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-02 00:16:57.819967 | orchestrator 
| export MANAGER_VERSION=latest 2026-03-02 00:16:57.819990 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-02 00:16:57.820002 | orchestrator | 2026-03-02 00:16:57.820020 | orchestrator | export ARA=false 2026-03-02 00:16:57.820032 | orchestrator | export DEPLOY_MODE=manager 2026-03-02 00:16:57.820049 | orchestrator | export TEMPEST=true 2026-03-02 00:16:57.820061 | orchestrator | export IS_ZUUL=true 2026-03-02 00:16:57.820072 | orchestrator | 2026-03-02 00:16:57.820090 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.58 2026-03-02 00:16:57.820102 | orchestrator | export EXTERNAL_API=false 2026-03-02 00:16:57.820113 | orchestrator | 2026-03-02 00:16:57.820124 | orchestrator | export IMAGE_USER=ubuntu 2026-03-02 00:16:57.820139 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-02 00:16:57.820150 | orchestrator | 2026-03-02 00:16:57.820161 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-02 00:16:57.820178 | orchestrator | 2026-03-02 00:16:57.820190 | orchestrator | + echo 2026-03-02 00:16:57.820207 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-02 00:16:57.821113 | orchestrator | ++ export INTERACTIVE=false 2026-03-02 00:16:57.821131 | orchestrator | ++ INTERACTIVE=false 2026-03-02 00:16:57.821145 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-02 00:16:57.821161 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-02 00:16:57.821409 | orchestrator | + source /opt/manager-vars.sh 2026-03-02 00:16:57.821428 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-02 00:16:57.821529 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-02 00:16:57.821544 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-02 00:16:57.821555 | orchestrator | ++ CEPH_VERSION=reef 2026-03-02 00:16:57.821570 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-02 00:16:57.821582 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-02 00:16:57.821593 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-02 00:16:57.821604 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-03-02 00:16:57.821616 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-02 00:16:57.821636 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-02 00:16:57.821652 | orchestrator | ++ export ARA=false 2026-03-02 00:16:57.821663 | orchestrator | ++ ARA=false 2026-03-02 00:16:57.821674 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-02 00:16:57.821685 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-02 00:16:57.821696 | orchestrator | ++ export TEMPEST=true 2026-03-02 00:16:57.821707 | orchestrator | ++ TEMPEST=true 2026-03-02 00:16:57.821718 | orchestrator | ++ export IS_ZUUL=true 2026-03-02 00:16:57.821729 | orchestrator | ++ IS_ZUUL=true 2026-03-02 00:16:57.821739 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.58 2026-03-02 00:16:57.821751 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.58 2026-03-02 00:16:57.821762 | orchestrator | ++ export EXTERNAL_API=false 2026-03-02 00:16:57.821773 | orchestrator | ++ EXTERNAL_API=false 2026-03-02 00:16:57.821788 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-02 00:16:57.821799 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-02 00:16:57.821810 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-02 00:16:57.821821 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-02 00:16:57.821832 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-02 00:16:57.821843 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-02 00:16:57.821854 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-02 00:16:57.873960 | orchestrator | + docker version 2026-03-02 00:16:57.989464 | orchestrator | Client: Docker Engine - Community 2026-03-02 00:16:57.989560 | orchestrator | Version: 27.5.1 2026-03-02 00:16:57.989576 | orchestrator | API version: 1.47 2026-03-02 00:16:57.989590 | orchestrator | Go version: go1.22.11 2026-03-02 00:16:57.989601 | orchestrator | Git commit: 9f9e405 2026-03-02 00:16:57.989612 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-02 00:16:57.989625 | orchestrator | OS/Arch: linux/amd64 2026-03-02 00:16:57.989636 | orchestrator | Context: default 2026-03-02 00:16:57.989647 | orchestrator | 2026-03-02 00:16:57.989659 | orchestrator | Server: Docker Engine - Community 2026-03-02 00:16:57.989671 | orchestrator | Engine: 2026-03-02 00:16:57.989682 | orchestrator | Version: 27.5.1 2026-03-02 00:16:57.989693 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-02 00:16:57.989736 | orchestrator | Go version: go1.22.11 2026-03-02 00:16:57.989748 | orchestrator | Git commit: 4c9b3b0 2026-03-02 00:16:57.989759 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-02 00:16:57.989770 | orchestrator | OS/Arch: linux/amd64 2026-03-02 00:16:57.989781 | orchestrator | Experimental: false 2026-03-02 00:16:57.989792 | orchestrator | containerd: 2026-03-02 00:16:57.989803 | orchestrator | Version: v2.2.1 2026-03-02 00:16:57.989814 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-03-02 00:16:57.989825 | orchestrator | runc: 2026-03-02 00:16:57.989836 | orchestrator | Version: 1.3.4 2026-03-02 00:16:57.989847 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-02 00:16:57.989858 | orchestrator | docker-init: 2026-03-02 00:16:57.989869 | orchestrator | Version: 0.19.0 2026-03-02 00:16:57.989881 | orchestrator | GitCommit: de40ad0 2026-03-02 00:16:57.990661 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-02 00:16:57.999153 | orchestrator | + set -e 2026-03-02 00:16:57.999220 | orchestrator | + source /opt/manager-vars.sh 2026-03-02 00:16:57.999234 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-02 00:16:57.999248 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-02 00:16:57.999259 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-02 00:16:57.999302 | orchestrator | ++ CEPH_VERSION=reef 2026-03-02 00:16:57.999314 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-02 
00:16:57.999326 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-02 00:16:57.999337 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-02 00:16:57.999349 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-02 00:16:57.999360 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-02 00:16:57.999371 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-02 00:16:57.999382 | orchestrator | ++ export ARA=false
2026-03-02 00:16:57.999393 | orchestrator | ++ ARA=false
2026-03-02 00:16:57.999404 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-02 00:16:57.999416 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-02 00:16:57.999427 | orchestrator | ++ export TEMPEST=true
2026-03-02 00:16:57.999438 | orchestrator | ++ TEMPEST=true
2026-03-02 00:16:57.999448 | orchestrator | ++ export IS_ZUUL=true
2026-03-02 00:16:57.999459 | orchestrator | ++ IS_ZUUL=true
2026-03-02 00:16:57.999470 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.58
2026-03-02 00:16:57.999481 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.58
2026-03-02 00:16:57.999493 | orchestrator | ++ export EXTERNAL_API=false
2026-03-02 00:16:57.999504 | orchestrator | ++ EXTERNAL_API=false
2026-03-02 00:16:57.999514 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-02 00:16:57.999525 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-02 00:16:57.999545 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-02 00:16:57.999556 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-02 00:16:57.999568 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-02 00:16:57.999579 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-02 00:16:57.999590 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-02 00:16:57.999601 | orchestrator | ++ export INTERACTIVE=false
2026-03-02 00:16:57.999612 | orchestrator | ++ INTERACTIVE=false
2026-03-02 00:16:57.999622 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-02 00:16:57.999637 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-02 00:16:57.999649 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-03-02 00:16:57.999659 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-02 00:16:57.999670 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2026-03-02 00:16:58.006588 | orchestrator | + set -e
2026-03-02 00:16:58.006651 | orchestrator | + VERSION=reef
2026-03-02 00:16:58.007004 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2026-03-02 00:16:58.011307 | orchestrator | + [[ -n ceph_version: reef ]]
2026-03-02 00:16:58.011368 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2026-03-02 00:16:58.016887 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2026-03-02 00:16:58.023168 | orchestrator | + set -e
2026-03-02 00:16:58.023565 | orchestrator | + VERSION=2024.2
2026-03-02 00:16:58.023994 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2026-03-02 00:16:58.027429 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2026-03-02 00:16:58.027463 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2026-03-02 00:16:58.032373 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-03-02 00:16:58.032884 | orchestrator | ++ semver latest 7.0.0
2026-03-02 00:16:58.092945 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-02 00:16:58.093038 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-02 00:16:58.093053 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-03-02 00:16:58.093598 | orchestrator | ++ semver latest 10.0.0-0
2026-03-02 00:16:58.150690 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-02 00:16:58.151673 | orchestrator | ++ semver 2024.2 2025.1
2026-03-02 00:16:58.204684 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-02 00:16:58.204784 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-03-02 00:16:58.295565 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-02 00:16:58.296804 | orchestrator | + source /opt/venv/bin/activate
2026-03-02 00:16:58.298135 | orchestrator | ++ deactivate nondestructive
2026-03-02 00:16:58.298160 | orchestrator | ++ '[' -n '' ']'
2026-03-02 00:16:58.298174 | orchestrator | ++ '[' -n '' ']'
2026-03-02 00:16:58.298187 | orchestrator | ++ hash -r
2026-03-02 00:16:58.298205 | orchestrator | ++ '[' -n '' ']'
2026-03-02 00:16:58.298216 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-02 00:16:58.298228 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-02 00:16:58.298242 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-02 00:16:58.298254 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-02 00:16:58.298297 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-02 00:16:58.298310 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-02 00:16:58.298321 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-02 00:16:58.298337 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-02 00:16:58.298349 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-02 00:16:58.298415 | orchestrator | ++ export PATH
2026-03-02 00:16:58.298433 | orchestrator | ++ '[' -n '' ']'
2026-03-02 00:16:58.298549 | orchestrator | ++ '[' -z '' ']'
2026-03-02 00:16:58.298563 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-02 00:16:58.298575 | orchestrator | ++ PS1='(venv) '
2026-03-02 00:16:58.298586 | orchestrator | ++ export PS1
2026-03-02 00:16:58.298597 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-02 00:16:58.298613 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-02 00:16:58.298700 | orchestrator | ++ hash -r
2026-03-02 00:16:58.298770 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-03-02 00:16:59.404392 | orchestrator |
2026-03-02 00:16:59.404480 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-03-02 00:16:59.404490 | orchestrator |
2026-03-02 00:16:59.404497 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-02 00:16:59.931908 | orchestrator | ok: [testbed-manager]
2026-03-02 00:16:59.932014 | orchestrator |
2026-03-02 00:16:59.932027 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-02 00:17:00.851816 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:00.851880 | orchestrator |
2026-03-02 00:17:00.851888 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-03-02 00:17:00.851893 | orchestrator |
2026-03-02 00:17:00.851898 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-02 00:17:02.907821 | orchestrator | ok: [testbed-manager]
2026-03-02 00:17:02.907927 | orchestrator |
2026-03-02 00:17:02.907945 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-03-02 00:17:02.956091 | orchestrator | ok: [testbed-manager]
2026-03-02 00:17:02.956187 | orchestrator |
2026-03-02 00:17:02.956207 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-03-02 00:17:03.383657 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:03.383785 | orchestrator |
2026-03-02 00:17:03.383802 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-03-02 00:17:03.410833 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:17:03.410928 | orchestrator |
2026-03-02 00:17:03.410945 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-02 00:17:03.724214 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:03.724334 | orchestrator |
2026-03-02 00:17:03.724353 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-03-02 00:17:04.043369 | orchestrator | ok: [testbed-manager]
2026-03-02 00:17:04.043466 | orchestrator |
2026-03-02 00:17:04.043483 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-03-02 00:17:04.153255 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:17:04.153364 | orchestrator |
2026-03-02 00:17:04.153376 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-03-02 00:17:04.153386 | orchestrator |
2026-03-02 00:17:04.153395 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-02 00:17:05.801926 | orchestrator | ok: [testbed-manager]
2026-03-02 00:17:05.802073 | orchestrator |
2026-03-02 00:17:05.802093 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-03-02 00:17:05.890758 | orchestrator | included: osism.services.traefik for testbed-manager
2026-03-02 00:17:05.890848 | orchestrator |
2026-03-02 00:17:05.890863 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-03-02 00:17:05.943547 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-03-02 00:17:05.943623 | orchestrator |
2026-03-02 00:17:05.943637 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-03-02 00:17:07.003745 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-03-02 00:17:07.003819 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-03-02 00:17:07.003830 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-03-02 00:17:07.003839 | orchestrator |
2026-03-02 00:17:07.003849 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-03-02 00:17:08.734698 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-03-02 00:17:08.734790 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-03-02 00:17:08.734833 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-03-02 00:17:08.734847 | orchestrator |
2026-03-02 00:17:08.734860 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-03-02 00:17:09.355739 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-02 00:17:09.355836 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:09.355854 | orchestrator |
2026-03-02 00:17:09.355867 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-03-02 00:17:09.986332 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-02 00:17:09.986456 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:09.986473 | orchestrator |
2026-03-02 00:17:09.986486 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-03-02 00:17:10.045410 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:17:10.045495 | orchestrator |
2026-03-02 00:17:10.045509 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-03-02 00:17:10.389803 | orchestrator | ok: [testbed-manager]
2026-03-02 00:17:10.389940 | orchestrator |
2026-03-02 00:17:10.389964 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-03-02 00:17:10.453213 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-03-02 00:17:10.453332 | orchestrator |
2026-03-02 00:17:10.453349 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-03-02 00:17:11.498798 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:11.498901 | orchestrator |
2026-03-02 00:17:11.498919 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-03-02 00:17:12.294000 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:12.294140 | orchestrator |
2026-03-02 00:17:12.294163 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-03-02 00:17:25.036732 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:25.036845 | orchestrator |
2026-03-02 00:17:25.036878 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-03-02 00:17:25.080583 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:17:25.080677 | orchestrator |
2026-03-02 00:17:25.080693 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-03-02 00:17:25.080706 | orchestrator |
2026-03-02 00:17:25.080718 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-02 00:17:26.567728 | orchestrator | ok: [testbed-manager]
2026-03-02 00:17:26.567825 | orchestrator |
2026-03-02 00:17:26.567874 | orchestrator | TASK [Apply manager role] ******************************************************
2026-03-02 00:17:26.667464 | orchestrator | included: osism.services.manager for testbed-manager
2026-03-02 00:17:26.667550 | orchestrator |
2026-03-02 00:17:26.667566 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-03-02 00:17:26.720379 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-03-02 00:17:26.720457 | orchestrator |
2026-03-02 00:17:26.720471 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-03-02 00:17:28.683095 | orchestrator | ok: [testbed-manager]
2026-03-02 00:17:28.683204 | orchestrator |
2026-03-02 00:17:28.683230 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-03-02 00:17:28.732821 | orchestrator | ok: [testbed-manager]
2026-03-02 00:17:28.732924 | orchestrator |
2026-03-02 00:17:28.733832 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-03-02 00:17:28.853588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-03-02 00:17:28.853679 | orchestrator |
2026-03-02 00:17:28.853697 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-03-02 00:17:31.312582 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-03-02 00:17:31.312659 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-03-02 00:17:31.312668 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-03-02 00:17:31.312677 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-03-02 00:17:31.312684 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-03-02 00:17:31.312692 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-03-02 00:17:31.312700 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-03-02 00:17:31.312708 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-03-02 00:17:31.312716 | orchestrator |
2026-03-02 00:17:31.312724 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-03-02 00:17:31.859303 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:31.859394 | orchestrator |
2026-03-02 00:17:31.859411 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-03-02 00:17:32.392693 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:32.392823 | orchestrator |
2026-03-02 00:17:32.392851 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-03-02 00:17:32.461956 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-03-02 00:17:32.462086 | orchestrator |
2026-03-02 00:17:32.462103 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-03-02 00:17:33.521648 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-03-02 00:17:33.521746 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-03-02 00:17:33.521762 | orchestrator |
2026-03-02 00:17:33.521775 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-03-02 00:17:34.112868 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:34.112957 | orchestrator |
2026-03-02 00:17:34.112971 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-03-02 00:17:34.169232 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:17:34.169352 | orchestrator |
2026-03-02 00:17:34.169369 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-03-02 00:17:34.246577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-03-02 00:17:34.246675 | orchestrator |
2026-03-02 00:17:34.246693 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-03-02 00:17:34.837821 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:34.837916 | orchestrator |
2026-03-02 00:17:34.837932 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-03-02 00:17:34.892176 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-03-02 00:17:34.892339 | orchestrator |
2026-03-02 00:17:34.892357 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-03-02 00:17:36.219758 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-02 00:17:36.219851 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-02 00:17:36.219865 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:36.219878 | orchestrator |
2026-03-02 00:17:36.219891 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-03-02 00:17:36.818144 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:36.818275 | orchestrator |
2026-03-02 00:17:36.818293 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-03-02 00:17:36.873984 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:17:36.874120 | orchestrator |
2026-03-02 00:17:36.874136 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-03-02 00:17:36.957517 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-03-02 00:17:36.957670 | orchestrator |
2026-03-02 00:17:36.957698 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-03-02 00:17:37.452665 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:37.452760 | orchestrator |
2026-03-02 00:17:37.452797 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-03-02 00:17:37.836983 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:37.837101 | orchestrator |
2026-03-02 00:17:37.837127 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-03-02 00:17:39.049826 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-03-02 00:17:39.049922 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-03-02 00:17:39.049937 | orchestrator |
2026-03-02 00:17:39.049951 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-03-02 00:17:39.671077 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:39.671178 | orchestrator |
2026-03-02 00:17:39.671195 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-03-02 00:17:40.043990 | orchestrator | ok: [testbed-manager]
2026-03-02 00:17:40.044083 | orchestrator |
2026-03-02 00:17:40.044098 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-03-02 00:17:40.387728 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:40.387821 | orchestrator |
2026-03-02 00:17:40.387840 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-03-02 00:17:40.440215 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:17:40.440365 | orchestrator |
2026-03-02 00:17:40.440382 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-03-02 00:17:40.523157 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-03-02 00:17:40.523333 | orchestrator |
2026-03-02 00:17:40.523362 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-03-02 00:17:40.563284 | orchestrator | ok: [testbed-manager]
2026-03-02 00:17:40.563374 | orchestrator |
2026-03-02 00:17:40.563389 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-03-02 00:17:42.534547 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-03-02 00:17:42.534650 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-03-02 00:17:42.534666 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-03-02 00:17:42.534678 | orchestrator |
2026-03-02 00:17:42.534691 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-03-02 00:17:43.255969 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:43.256085 | orchestrator |
2026-03-02 00:17:43.256111 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-03-02 00:17:43.954105 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:43.954200 | orchestrator |
2026-03-02 00:17:43.954217 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-03-02 00:17:44.631705 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:44.631797 | orchestrator |
2026-03-02 00:17:44.631815 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-03-02 00:17:44.692029 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-03-02 00:17:44.692142 | orchestrator |
2026-03-02 00:17:44.692664 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-03-02 00:17:44.719810 | orchestrator | ok: [testbed-manager]
2026-03-02 00:17:44.719894 | orchestrator |
2026-03-02 00:17:44.719908 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-03-02 00:17:45.335597 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-03-02 00:17:45.335682 | orchestrator |
2026-03-02 00:17:45.335696 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-03-02 00:17:45.406339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-03-02 00:17:45.406435 | orchestrator |
2026-03-02 00:17:45.406453 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-03-02 00:17:46.034726 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:46.034824 | orchestrator |
2026-03-02 00:17:46.034841 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-03-02 00:17:46.542657 | orchestrator | ok: [testbed-manager]
2026-03-02 00:17:46.542742 | orchestrator |
2026-03-02 00:17:46.542758 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-03-02 00:17:46.585811 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:17:46.585899 | orchestrator |
2026-03-02 00:17:46.585916 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-03-02 00:17:46.625383 | orchestrator | ok: [testbed-manager]
2026-03-02 00:17:46.625453 | orchestrator |
2026-03-02 00:17:46.625463 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-03-02 00:17:47.340645 | orchestrator | changed: [testbed-manager]
2026-03-02 00:17:47.340741 | orchestrator |
2026-03-02 00:17:47.340759 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-03-02 00:18:49.200936 | orchestrator | changed: [testbed-manager]
2026-03-02 00:18:49.201266 | orchestrator |
2026-03-02 00:18:49.201301 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-03-02 00:18:50.056777 | orchestrator | ok: [testbed-manager]
2026-03-02 00:18:50.056849 | orchestrator |
2026-03-02 00:18:50.056856 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-03-02 00:18:50.110947 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:18:50.111002 | orchestrator |
2026-03-02 00:18:50.111008 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-03-02 00:18:52.405066 | orchestrator | changed: [testbed-manager]
2026-03-02 00:18:52.405168 | orchestrator |
2026-03-02 00:18:52.405249 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-03-02 00:18:52.467864 | orchestrator | ok: [testbed-manager]
2026-03-02 00:18:52.467952 | orchestrator |
2026-03-02 00:18:52.467989 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-02 00:18:52.468002 | orchestrator |
2026-03-02 00:18:52.468013 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-03-02 00:18:52.516718 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:18:52.516814 | orchestrator |
2026-03-02 00:18:52.516833 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-03-02 00:19:52.569898 | orchestrator | Pausing for 60 seconds
2026-03-02 00:19:52.570112 | orchestrator | changed: [testbed-manager]
2026-03-02 00:19:52.570133 | orchestrator |
2026-03-02 00:19:52.570147 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-03-02 00:19:55.100863 | orchestrator | changed: [testbed-manager]
2026-03-02 00:19:55.100960 | orchestrator |
2026-03-02 00:19:55.100976 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-03-02 00:20:36.567749 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-03-02 00:20:36.567827 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-03-02 00:20:36.567840 | orchestrator | changed: [testbed-manager]
2026-03-02 00:20:36.567866 | orchestrator |
2026-03-02 00:20:36.567875 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-03-02 00:20:46.743094 | orchestrator | changed: [testbed-manager]
2026-03-02 00:20:46.743195 | orchestrator |
2026-03-02 00:20:46.743203 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-03-02 00:20:46.832272 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-03-02 00:20:46.832316 | orchestrator |
2026-03-02 00:20:46.832321 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-02 00:20:46.832327 | orchestrator |
2026-03-02 00:20:46.832331 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-03-02 00:20:46.884543 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:20:46.884602 | orchestrator |
2026-03-02 00:20:46.884608 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-03-02 00:20:46.956933 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-03-02 00:20:46.956990 | orchestrator |
2026-03-02 00:20:46.956996 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-03-02 00:20:47.778149 | orchestrator | changed: [testbed-manager]
2026-03-02 00:20:47.778207 | orchestrator |
2026-03-02 00:20:47.778213 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-03-02 00:20:50.762569 | orchestrator | ok: [testbed-manager]
2026-03-02 00:20:50.762670 | orchestrator |
2026-03-02 00:20:50.762688 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-03-02 00:20:50.835772 | orchestrator | ok: [testbed-manager] => {
2026-03-02 00:20:50.835857 | orchestrator |     "version_check_result.stdout_lines": [
2026-03-02 00:20:50.835871 | orchestrator |         "=== OSISM Container Version Check ===",
2026-03-02 00:20:50.835882 | orchestrator |         "Checking running containers against expected versions...",
2026-03-02 00:20:50.835894 | orchestrator |         "",
2026-03-02 00:20:50.835908 | orchestrator |         "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-03-02 00:20:50.835919 | orchestrator |         "  Expected: registry.osism.tech/osism/inventory-reconciler:latest",
2026-03-02 00:20:50.835930 | orchestrator |         "  Enabled: true",
2026-03-02 00:20:50.835941 | orchestrator |         "  Running: registry.osism.tech/osism/inventory-reconciler:latest",
2026-03-02 00:20:50.835951 | orchestrator |         "  Status: ✅ MATCH",
2026-03-02 00:20:50.835962 | orchestrator |         "",
2026-03-02 00:20:50.835973 | orchestrator |         "Checking service: osism-ansible (OSISM Ansible Service)",
2026-03-02 00:20:50.835983 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-ansible:latest",
2026-03-02 00:20:50.835994 | orchestrator |         "  Enabled: true",
2026-03-02 00:20:50.836004 | orchestrator |         "  Running: registry.osism.tech/osism/osism-ansible:latest",
2026-03-02 00:20:50.836015 | orchestrator |         "  Status: ✅ MATCH",
2026-03-02 00:20:50.836025 | orchestrator |         "",
2026-03-02 00:20:50.836036 | orchestrator |         "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-03-02 00:20:50.836046 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-kubernetes:latest",
2026-03-02 00:20:50.836056 | orchestrator |         "  Enabled: true",
2026-03-02 00:20:50.836067 | orchestrator |         "  Running: registry.osism.tech/osism/osism-kubernetes:latest",
2026-03-02 00:20:50.836077 | orchestrator |         "  Status: ✅ MATCH",
2026-03-02 00:20:50.836088 | orchestrator |         "",
2026-03-02 00:20:50.836098 | orchestrator |         "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-03-02 00:20:50.836109 | orchestrator |         "  Expected: registry.osism.tech/osism/ceph-ansible:reef",
2026-03-02 00:20:50.836120 | orchestrator |         "  Enabled: true",
2026-03-02 00:20:50.836168 | orchestrator |         "  Running: registry.osism.tech/osism/ceph-ansible:reef",
2026-03-02 00:20:50.836179 | orchestrator |         "  Status: ✅ MATCH",
2026-03-02 00:20:50.836190 | orchestrator |         "",
2026-03-02 00:20:50.836201 | orchestrator |         "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-03-02 00:20:50.836211 | orchestrator |         "  Expected: registry.osism.tech/osism/kolla-ansible:2024.2",
2026-03-02 00:20:50.836245 | orchestrator |         "  Enabled: true",
2026-03-02 00:20:50.836256 | orchestrator |         "  Running: registry.osism.tech/osism/kolla-ansible:2024.2",
2026-03-02 00:20:50.836266 | orchestrator |         "  Status: ✅ MATCH",
2026-03-02 00:20:50.836277 | orchestrator |         "",
2026-03-02 00:20:50.836287 | orchestrator |         "Checking service: osismclient (OSISM Client)",
2026-03-02 00:20:50.836298 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-03-02 00:20:50.836309 | orchestrator |         "  Enabled: true",
2026-03-02 00:20:50.836320 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-03-02 00:20:50.836330 | orchestrator |         "  Status: ✅ MATCH",
2026-03-02 00:20:50.836341 | orchestrator |         "",
2026-03-02 00:20:50.836353 | orchestrator |         "Checking service: ara-server (ARA Server)",
2026-03-02 00:20:50.836365 | orchestrator |         "  Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-02 00:20:50.836376 | orchestrator |         "  Enabled: true",
2026-03-02 00:20:50.836387 | orchestrator |         "  Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-02 00:20:50.836399 | orchestrator |         "  Status: ✅ MATCH",
2026-03-02 00:20:50.836410 | orchestrator |         "",
2026-03-02 00:20:50.836421 | orchestrator |         "Checking service: mariadb (MariaDB for ARA)",
2026-03-02 00:20:50.836433 | orchestrator |         "  Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-02 00:20:50.836444 | orchestrator |         "  Enabled: true",
2026-03-02 00:20:50.836455 | orchestrator |         "  Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-02 00:20:50.836465 | orchestrator |         "  Status: ✅ MATCH",
2026-03-02 00:20:50.836476 | orchestrator |         "",
2026-03-02 00:20:50.836494 | orchestrator |         "Checking service: frontend (OSISM Frontend)",
2026-03-02 00:20:50.836505 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-frontend:latest",
2026-03-02 00:20:50.836518 | orchestrator |         "  Enabled: true",
2026-03-02 00:20:50.836529 | orchestrator |         "  Running: registry.osism.tech/osism/osism-frontend:latest",
2026-03-02 00:20:50.836540 | orchestrator |         "  Status: ✅ MATCH",
2026-03-02 00:20:50.836551 | orchestrator |         "",
2026-03-02 00:20:50.836562 | orchestrator |         "Checking service: redis (Redis Cache)",
2026-03-02 00:20:50.836572 | orchestrator |         "  Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-02 00:20:50.836583 | orchestrator |         "  Enabled: true",
2026-03-02 00:20:50.836593 | orchestrator |         "  Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-02 00:20:50.836604 | orchestrator |         "  Status: ✅ MATCH",
2026-03-02 00:20:50.836614 | orchestrator |         "",
2026-03-02 00:20:50.836624 | orchestrator |         "Checking service: api (OSISM API Service)",
2026-03-02 00:20:50.836635 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-03-02 00:20:50.836645 | orchestrator |         "  Enabled: true",
2026-03-02 00:20:50.836657 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-03-02 00:20:50.836668 | orchestrator |         "  Status: ✅ MATCH",
2026-03-02 00:20:50.836678 | orchestrator |         "",
2026-03-02 00:20:50.836688 | orchestrator |         "Checking service: listener (OpenStack Event Listener)",
2026-03-02 00:20:50.836699 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-03-02 00:20:50.836710 | orchestrator |         "  Enabled: true",
2026-03-02 00:20:50.836720 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-03-02 00:20:50.836731 | orchestrator |         "  Status: ✅ MATCH",
2026-03-02 00:20:50.836742 | orchestrator |         "",
2026-03-02 00:20:50.836752 | orchestrator |         "Checking service: openstack (OpenStack Integration)",
2026-03-02 00:20:50.836763 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-03-02 00:20:50.836773 | orchestrator |         "  Enabled: true",
2026-03-02 00:20:50.836783 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-03-02 00:20:50.836794 | orchestrator |         "  Status: ✅ MATCH",
2026-03-02 00:20:50.836804 | orchestrator |         "",
2026-03-02 00:20:50.836815 | orchestrator |         "Checking service: beat (Celery Beat Scheduler)",
2026-03-02 00:20:50.836825 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-03-02 00:20:50.836835 | orchestrator |         "  Enabled: true",
2026-03-02 00:20:50.836846 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-03-02 00:20:50.836856 | orchestrator |         "  Status: ✅ MATCH",
2026-03-02 00:20:50.836874 | orchestrator |         "",
2026-03-02 00:20:50.836884 | orchestrator |         "Checking service: flower (Celery Flower Monitor)",
2026-03-02 00:20:50.836915 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-03-02 00:20:50.836927 | orchestrator |         "  Enabled: true",
2026-03-02 00:20:50.836937 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-03-02 00:20:50.836948 | orchestrator |         "  Status: ✅ MATCH",
2026-03-02 00:20:50.836955 | orchestrator |         "",
2026-03-02 00:20:50.836961 | orchestrator |         "=== Summary ===",
2026-03-02 00:20:50.836968 | orchestrator |         "Errors (version mismatches): 0",
2026-03-02 00:20:50.836974 | orchestrator |         "Warnings (expected containers not running): 0",
2026-03-02 00:20:50.836980 | orchestrator |         "",
2026-03-02 00:20:50.836987 | orchestrator |         "✅ All running containers match expected versions!"
2026-03-02 00:20:50.836993 | orchestrator |     ]
2026-03-02 00:20:50.837000 | orchestrator | }
2026-03-02 00:20:50.837007 | orchestrator |
2026-03-02 00:20:50.837014 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-03-02 00:20:50.880250 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:20:50.880340 | orchestrator |
2026-03-02 00:20:50.880355 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:20:50.880369 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-03-02 00:20:50.880381 | orchestrator |
2026-03-02 00:20:50.975491 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-02 00:20:50.975598 | orchestrator | + deactivate
2026-03-02 00:20:50.975616 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-02 00:20:50.975631 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-02 00:20:50.975643 | orchestrator | + export PATH
2026-03-02 00:20:50.975655 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-02 00:20:50.975666 | orchestrator | + '[' -n '' ']'
2026-03-02 00:20:50.975678 | orchestrator | + hash -r
2026-03-02 00:20:50.975689 | orchestrator | + '[' -n '' ']'
2026-03-02 00:20:50.975701 | orchestrator | + unset VIRTUAL_ENV
2026-03-02 00:20:50.975712 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-02 00:20:50.975723 | orchestrator | + '[' '!'
'' = nondestructive ']' 2026-03-02 00:20:50.975734 | orchestrator | + unset -f deactivate 2026-03-02 00:20:50.975745 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-02 00:20:50.990818 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-02 00:20:50.990888 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-02 00:20:50.990904 | orchestrator | + local max_attempts=60 2026-03-02 00:20:50.990918 | orchestrator | + local name=ceph-ansible 2026-03-02 00:20:50.990929 | orchestrator | + local attempt_num=1 2026-03-02 00:20:50.991248 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-02 00:20:51.018617 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-02 00:20:51.018694 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-02 00:20:51.018707 | orchestrator | + local max_attempts=60 2026-03-02 00:20:51.018719 | orchestrator | + local name=kolla-ansible 2026-03-02 00:20:51.018731 | orchestrator | + local attempt_num=1 2026-03-02 00:20:51.019532 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-02 00:20:51.058781 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-02 00:20:51.058867 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-02 00:20:51.058882 | orchestrator | + local max_attempts=60 2026-03-02 00:20:51.058895 | orchestrator | + local name=osism-ansible 2026-03-02 00:20:51.058907 | orchestrator | + local attempt_num=1 2026-03-02 00:20:51.058918 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-02 00:20:51.092738 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-02 00:20:51.092821 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-02 00:20:51.092842 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-02 00:20:51.727670 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-03-02 00:20:51.867764 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-02 00:20:51.867919 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-03-02 00:20:51.867940 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-03-02 00:20:51.867953 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-03-02 00:20:51.867965 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-03-02 00:20:51.867976 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-03-02 00:20:51.867987 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-03-02 00:20:51.867999 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 57 seconds (healthy) 2026-03-02 00:20:51.868027 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-03-02 00:20:51.868039 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-03-02 00:20:51.868050 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute 
ago Up About a minute (healthy) 2026-03-02 00:20:51.868061 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-03-02 00:20:51.868072 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-03-02 00:20:51.868083 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-03-02 00:20:51.868094 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-03-02 00:20:51.868105 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-03-02 00:20:51.872649 | orchestrator | ++ semver latest 7.0.0 2026-03-02 00:20:51.912399 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-02 00:20:51.912490 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-02 00:20:51.912507 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-02 00:20:51.916486 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-02 00:21:04.099433 | orchestrator | 2026-03-02 00:21:04 | INFO  | Prepare task for execution of resolvconf. 2026-03-02 00:21:04.294334 | orchestrator | 2026-03-02 00:21:04 | INFO  | Task 3880b39e-8fe6-492b-9f91-5f586b30e7cc (resolvconf) was prepared for execution. 2026-03-02 00:21:04.294429 | orchestrator | 2026-03-02 00:21:04 | INFO  | It takes a moment until task 3880b39e-8fe6-492b-9f91-5f586b30e7cc (resolvconf) has been started and output is visible here. 
2026-03-02 00:21:19.914600 | orchestrator | 2026-03-02 00:21:19.914705 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-02 00:21:19.914722 | orchestrator | 2026-03-02 00:21:19.914734 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-02 00:21:19.914746 | orchestrator | Monday 02 March 2026 00:21:08 +0000 (0:00:00.137) 0:00:00.137 ********** 2026-03-02 00:21:19.914763 | orchestrator | ok: [testbed-manager] 2026-03-02 00:21:19.914783 | orchestrator | 2026-03-02 00:21:19.914800 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-02 00:21:19.914820 | orchestrator | Monday 02 March 2026 00:21:13 +0000 (0:00:04.678) 0:00:04.815 ********** 2026-03-02 00:21:19.914840 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:21:19.914859 | orchestrator | 2026-03-02 00:21:19.914873 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-02 00:21:19.914884 | orchestrator | Monday 02 March 2026 00:21:13 +0000 (0:00:00.053) 0:00:04.869 ********** 2026-03-02 00:21:19.914895 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-02 00:21:19.914908 | orchestrator | 2026-03-02 00:21:19.914919 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-02 00:21:19.914930 | orchestrator | Monday 02 March 2026 00:21:13 +0000 (0:00:00.074) 0:00:04.943 ********** 2026-03-02 00:21:19.914951 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-02 00:21:19.914963 | orchestrator | 2026-03-02 00:21:19.914975 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-02 00:21:19.914986 | orchestrator | Monday 02 March 2026 00:21:13 +0000 (0:00:00.069) 0:00:05.013 ********** 2026-03-02 00:21:19.914997 | orchestrator | ok: [testbed-manager] 2026-03-02 00:21:19.915008 | orchestrator | 2026-03-02 00:21:19.915020 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-02 00:21:19.915031 | orchestrator | Monday 02 March 2026 00:21:14 +0000 (0:00:01.068) 0:00:06.081 ********** 2026-03-02 00:21:19.915043 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:21:19.915054 | orchestrator | 2026-03-02 00:21:19.915065 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-02 00:21:19.915077 | orchestrator | Monday 02 March 2026 00:21:14 +0000 (0:00:00.067) 0:00:06.149 ********** 2026-03-02 00:21:19.915088 | orchestrator | ok: [testbed-manager] 2026-03-02 00:21:19.915099 | orchestrator | 2026-03-02 00:21:19.915154 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-02 00:21:19.915168 | orchestrator | Monday 02 March 2026 00:21:14 +0000 (0:00:00.490) 0:00:06.639 ********** 2026-03-02 00:21:19.915180 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:21:19.915193 | orchestrator | 2026-03-02 00:21:19.915207 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-02 00:21:19.915221 | orchestrator | Monday 02 March 2026 00:21:15 +0000 (0:00:00.088) 0:00:06.728 ********** 2026-03-02 00:21:19.915234 | orchestrator | changed: [testbed-manager] 2026-03-02 00:21:19.915247 | orchestrator | 2026-03-02 00:21:19.915261 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-02 00:21:19.915273 | orchestrator | Monday 02 March 2026 00:21:15 +0000 (0:00:00.530) 0:00:07.258 ********** 2026-03-02 00:21:19.915286 | orchestrator | changed: 
[testbed-manager] 2026-03-02 00:21:19.915299 | orchestrator | 2026-03-02 00:21:19.915313 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-02 00:21:19.915326 | orchestrator | Monday 02 March 2026 00:21:16 +0000 (0:00:01.040) 0:00:08.299 ********** 2026-03-02 00:21:19.915338 | orchestrator | ok: [testbed-manager] 2026-03-02 00:21:19.915351 | orchestrator | 2026-03-02 00:21:19.915387 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-02 00:21:19.915401 | orchestrator | Monday 02 March 2026 00:21:18 +0000 (0:00:01.955) 0:00:10.255 ********** 2026-03-02 00:21:19.915416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-02 00:21:19.915428 | orchestrator | 2026-03-02 00:21:19.915441 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-02 00:21:19.915455 | orchestrator | Monday 02 March 2026 00:21:18 +0000 (0:00:00.087) 0:00:10.342 ********** 2026-03-02 00:21:19.915468 | orchestrator | changed: [testbed-manager] 2026-03-02 00:21:19.915480 | orchestrator | 2026-03-02 00:21:19.915491 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:21:19.915503 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-02 00:21:19.915514 | orchestrator | 2026-03-02 00:21:19.915525 | orchestrator | 2026-03-02 00:21:19.915537 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 00:21:19.915548 | orchestrator | Monday 02 March 2026 00:21:19 +0000 (0:00:01.093) 0:00:11.436 ********** 2026-03-02 00:21:19.915558 | orchestrator | =============================================================================== 2026-03-02 00:21:19.915569 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.68s 2026-03-02 00:21:19.915580 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.96s 2026-03-02 00:21:19.915591 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.09s 2026-03-02 00:21:19.915602 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.07s 2026-03-02 00:21:19.915613 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.04s 2026-03-02 00:21:19.915624 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.53s 2026-03-02 00:21:19.915654 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s 2026-03-02 00:21:19.915666 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-03-02 00:21:19.915677 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-03-02 00:21:19.915688 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2026-03-02 00:21:19.915699 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-03-02 00:21:19.915710 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-03-02 00:21:19.915721 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2026-03-02 00:21:20.198378 | orchestrator | + osism apply sshconfig 2026-03-02 00:21:32.138576 | orchestrator | 2026-03-02 00:21:32 | INFO  | Prepare task for execution of sshconfig. 2026-03-02 00:21:32.207330 | orchestrator | 2026-03-02 00:21:32 | INFO  | Task 023be42c-6b9e-49b0-8811-8d2163cba585 (sshconfig) was prepared for execution. 
2026-03-02 00:21:32.207387 | orchestrator | 2026-03-02 00:21:32 | INFO  | It takes a moment until task 023be42c-6b9e-49b0-8811-8d2163cba585 (sshconfig) has been started and output is visible here. 2026-03-02 00:21:42.743452 | orchestrator | 2026-03-02 00:21:42.743569 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-02 00:21:42.743586 | orchestrator | 2026-03-02 00:21:42.743599 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-02 00:21:42.743611 | orchestrator | Monday 02 March 2026 00:21:36 +0000 (0:00:00.118) 0:00:00.118 ********** 2026-03-02 00:21:42.743622 | orchestrator | ok: [testbed-manager] 2026-03-02 00:21:42.743635 | orchestrator | 2026-03-02 00:21:42.743646 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-02 00:21:42.743657 | orchestrator | Monday 02 March 2026 00:21:36 +0000 (0:00:00.500) 0:00:00.619 ********** 2026-03-02 00:21:42.743695 | orchestrator | changed: [testbed-manager] 2026-03-02 00:21:42.743707 | orchestrator | 2026-03-02 00:21:42.743719 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-02 00:21:42.743730 | orchestrator | Monday 02 March 2026 00:21:37 +0000 (0:00:00.473) 0:00:01.092 ********** 2026-03-02 00:21:42.743741 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-02 00:21:42.743753 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-02 00:21:42.743764 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-02 00:21:42.743775 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-02 00:21:42.743786 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-02 00:21:42.743797 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-03-02 00:21:42.743808 | orchestrator | changed: 
[testbed-manager] => (item=testbed-manager) 2026-03-02 00:21:42.743819 | orchestrator | 2026-03-02 00:21:42.743830 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-02 00:21:42.743841 | orchestrator | Monday 02 March 2026 00:21:41 +0000 (0:00:04.925) 0:00:06.017 ********** 2026-03-02 00:21:42.743853 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:21:42.743864 | orchestrator | 2026-03-02 00:21:42.743875 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-02 00:21:42.743886 | orchestrator | Monday 02 March 2026 00:21:42 +0000 (0:00:00.057) 0:00:06.075 ********** 2026-03-02 00:21:42.743897 | orchestrator | changed: [testbed-manager] 2026-03-02 00:21:42.743908 | orchestrator | 2026-03-02 00:21:42.743920 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:21:42.743933 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-02 00:21:42.743945 | orchestrator | 2026-03-02 00:21:42.743956 | orchestrator | 2026-03-02 00:21:42.743985 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 00:21:42.744000 | orchestrator | Monday 02 March 2026 00:21:42 +0000 (0:00:00.527) 0:00:06.602 ********** 2026-03-02 00:21:42.744018 | orchestrator | =============================================================================== 2026-03-02 00:21:42.744035 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 4.93s 2026-03-02 00:21:42.744054 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.53s 2026-03-02 00:21:42.744073 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.50s 2026-03-02 00:21:42.744091 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.47s 2026-03-02 00:21:42.744130 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2026-03-02 00:21:42.990472 | orchestrator | + osism apply known-hosts 2026-03-02 00:21:54.872016 | orchestrator | 2026-03-02 00:21:54 | INFO  | Prepare task for execution of known-hosts. 2026-03-02 00:21:54.938747 | orchestrator | 2026-03-02 00:21:54 | INFO  | Task f5fd934b-30ab-4474-971e-11a66d53ba00 (known-hosts) was prepared for execution. 2026-03-02 00:21:54.938846 | orchestrator | 2026-03-02 00:21:54 | INFO  | It takes a moment until task f5fd934b-30ab-4474-971e-11a66d53ba00 (known-hosts) has been started and output is visible here. 2026-03-02 00:22:09.654824 | orchestrator | 2026-03-02 00:22:09.654953 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-02 00:22:09.654979 | orchestrator | 2026-03-02 00:22:09.654998 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-02 00:22:09.655017 | orchestrator | Monday 02 March 2026 00:21:58 +0000 (0:00:00.117) 0:00:00.117 ********** 2026-03-02 00:22:09.655036 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-02 00:22:09.655055 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-02 00:22:09.655073 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-02 00:22:09.655167 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-02 00:22:09.655189 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-02 00:22:09.655208 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-02 00:22:09.655225 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-02 00:22:09.655243 | orchestrator | 2026-03-02 00:22:09.655263 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-02 
00:22:09.655282 | orchestrator | Monday 02 March 2026 00:22:04 +0000 (0:00:05.654) 0:00:05.771 ********** 2026-03-02 00:22:09.655317 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-02 00:22:09.655341 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-02 00:22:09.655362 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-02 00:22:09.655383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-02 00:22:09.655404 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-02 00:22:09.655424 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-02 00:22:09.655444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-02 00:22:09.655464 | orchestrator | 2026-03-02 00:22:09.655485 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-02 00:22:09.655506 | orchestrator | Monday 02 March 2026 00:22:04 +0000 (0:00:00.166) 0:00:05.938 ********** 2026-03-02 00:22:09.655531 | 
orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSJNDGt7Zl72G1RwNaCYLm18SZdCLxJvmfo6ChpQ0RN4amBjo7PJAXYCuHsnrytYnTRkOCrvFiyk1jq4ax8Zpq5qgkedAjVkK78YoYB9Qs+8/P+nZYfr/pzw74WDz4tgr3NqMSCajzvCxQHDd6XZOmvlUrJhgBNZtQodH3Js2rCfqsoe2rPAgok4FUmHa1g82jMER80xlTwur4T20dP4XCfb3oTpUYbMz+NeZmUU95HqOLyRmmj/fBSyhGgop3qVgG34Dt5LwU0sFoNhoHLoCzEDM+pq3eQKN/oHKWgJPkkanIPdWwhMTeWyi6Ozqq7dpLl/G86hReXbGZ4bY6Of8kErcbr/BtG0vFhWLSYpHO6RJnAB4iV+WwTiyqCSPtYCfYfh3vfVQyR74Ln47ZO8Qruv25KfGNfAF/bmhv3/tU2ZnfT4epmpdAbw6s6Ht+O/mxsf9+SdhBLPLsXuO/txOZ1GossFn4Q4VBHl//NIHpkJw6ZInB3LjpF3jGlSDiTVU=) 2026-03-02 00:22:09.655556 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE0QysPhLsWTTa8p0NqYky3NVwgB4V3Kg5HQhoVSBDUG) 2026-03-02 00:22:09.655579 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKKylkNloYYkgxUs3zDQaopL9vAltMNVn+SzD9KUdhPYn5u8njgcr40UrmAlMV5T7Y0TvJejA/sgxNRwptH/qT0=) 2026-03-02 00:22:09.655601 | orchestrator | 2026-03-02 00:22:09.655621 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-02 00:22:09.655642 | orchestrator | Monday 02 March 2026 00:22:05 +0000 (0:00:01.070) 0:00:07.009 ********** 2026-03-02 00:22:09.655663 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIrtqxAAFYEpIrcDSduhUqOR40qJelU14XXSCkluPMIKi5G87A6bLQ0GgeMtqWJyjIWljq+QiA+ACmK5KSNpKkM=) 2026-03-02 00:22:09.655685 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHq7ZZ7t1jveSxfJAbuHUIiObhIytDU2OYwp353PyzlH) 2026-03-02 00:22:09.655756 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCPLcs2YvJGBdui/HWHjf8lAzMREuFIcqQfeWciyTiCvNTe0AqbxQ6LpNUDfB4Ee3lbwwCcZ/pAyE/VQhwOF/8QZ/Z7zQdh7ppREdYjwCIdnj3J6BvRVi+huyW7Cvcej7UqywPeKxO1LHgYwl493y8SnKibWOx7KWa1s2QBIZn/09Yilwfvune4LRS78VdeFtwq957ZBmdCaP4+7dcENRs0jlOQFYpzGs9Ak6haMjqAd4KcpzlQfTS7EiPfEy1bfsGpRRmBfP6HFxm7Ff/fFUo8jWknPdRFPXWnYX21Is2Q5u8Nuz9m8/jOLKd8W2x+vE6MZfOBm1hv35iyl36PBxeK2SutGK36g9HndlQO3LfJHaaWfoi8dl0agdBvZe4XHvRveBVux9yX9fRYtgqOb7HnqHYHL54WUZZjiesIzgs0FuNFn6++vZOo/RG8zqDWVwBI/e5LA9FGWiw1Was3kFjXXWYs6CJ3NaGx4rVCmeFk8CvLElIvAs4vqVUNGV80AOc=) 2026-03-02 00:22:09.655779 | orchestrator | 2026-03-02 00:22:09.655798 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-02 00:22:09.655816 | orchestrator | Monday 02 March 2026 00:22:06 +0000 (0:00:00.990) 0:00:08.000 ********** 2026-03-02 00:22:09.655834 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGtTsBDYt6o1/tENzqJIm4ICFHfxaYkiy3BQQu0jqkwzbEBASqzEeK/EtJyLX8zkzF02/0WpXJb+JKzK4CbTvzg=) 2026-03-02 00:22:09.655853 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRYliI+nMyofFclr32tjnULQg4p7spwftgWwkAPchU3HHtTxNxrdEdKEUef5+5sw2LF+32aHnmlTL55Ru+lOXjCqz2zEHw3wud9fjHRkOtR73Pp5OywFebLI6UVMMim8qltb58Iglw8x4+4WwKAkIhpGQ6b1JbBsLUZwReViQYBND7XQNc2AvsjEnkdxlHsB3JOk6HiRTj+0pi58NR7WzxThIPhT/zEGn0zQFU07g0DrT+xpKGlbCtBVHKCfMYAYfga0Zj0Ix3Z/+3W8Kv4KrgfuY8J3UZiI5kqP5iSGAfFBIt/fH4baW23eRqTqurqRz9ex9yEZ2M8b1/0lVkqjd5FAacTc6xqI/GIyjBuyfcE/2G/xoZv6XEt7oMnCChk3LuntH6caQ1G9oyiE2UXGjLd9bc4yOH3sj2+qpROwolqVeOjie5PaMuWXqx/WBoCPBIqlorhaw2wHO1Rnq+MKSxL2nRabNEVVvOE6Y6/jZLdhdjb9brc5DbZoifa+Q/YcU=) 2026-03-02 00:22:09.655963 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFME3BMhrCQKbDW7gSx/aFTG4/PjwcTNRQ9ogPDv+4/e) 2026-03-02 00:22:09.655988 | orchestrator | 2026-03-02 00:22:09.656006 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-02 00:22:09.656025 | orchestrator | Monday 02 March 2026 00:22:07 +0000 (0:00:00.969) 0:00:08.969 ********** 2026-03-02 00:22:09.656052 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFLK6O+qHt5yJyH3PMHBprmCqACXU0S2d+LHEWWkv0zP4rAL7WiYVK0QuZpL3dp8AJwzuampSHq799G35Yvh7mI=) 2026-03-02 00:22:09.656073 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0QreecMugxLR3491tztIKFIUPg2Vb6WcKoyYPwmY5zx59tC4vnO1xKWYit9ljOeZyQtmHTxRFkUV9Q8jkBrsBui/HftEhs57o3VyIz2u7Kj1l8xu9mPQCcOpNH2sUzfBdA9RXPJLfINfX1I6kxb9o5CyJiLaa5rIeCTmqntKli8TJiDbHiuDwte3vCOmHKbdye/zB9Bl0dbtR0fu5VhGCKYoV+yimT98LZinLBGpk529FeVZCnzzV2sTKYxeWHOIkmA47147x00z3uvtoX2sbDQQckeLCOnj66vWxRmeq4iuOCAuMrW7TcWxIW0I/9XwRqMO8v3NLAZyjH+DUAwKRsGxP8KqB4F7alrCNGYobXrwC8AxxWmSwbZZQcF2TXiQbbiqtO4ygsgRd/3owO+yXhBO+8iXVD4ZlGOd6M/fdTJV7779PIYI5HGXTI3wUgx2lMOW9Alm0jzkGH7rrMCD95hL76IkiYPKXHvrMfbjlSGT581HO3Ro/2PiB/symbcU=) 2026-03-02 00:22:09.656126 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJhtn+Val53WyUw7R5VbF+xR2qhVqiqekjwl9dTUwFV6) 2026-03-02 00:22:09.656144 | orchestrator | 2026-03-02 00:22:09.656156 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-02 00:22:09.656166 | orchestrator | Monday 02 March 2026 00:22:08 +0000 (0:00:00.983) 0:00:09.953 ********** 2026-03-02 00:22:09.656178 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDDFf6Tbf5Rr2fivqqvxryExEvaYcBmDflOTPBpKJ7Ms8+anxQv82/NdqqpNEz/ygvFNwry0+qcKI7NyzojFLBhe+lSRunkntq1g78sl8uIzZh2MZsY/NqigTwQQUXU+ORBZ5cGg35rfsbyyC08m/NdEsmSTP6DRcWN3WyGyokP78oXfGPLpOD750kRhFC065uh6l1mwKqBbUkyKITGK3L/YxHLD+nnkrElpYWGXAF7C/oyhuG+AOHsPI5kJHzfzegma0DF2qe7rH8p4wTQXtQb2BPsux7MARCxZQ6myvoQP49ojwWBItnqHbQoj031BcT1STehPBd1TO/vc5BeKOwawC8n3oWmxckLIaAN3IYh9nCMg8vOPDlvPeY0ahqKDQeFCqjfDD0yQybgOcOP9G8ydSLvnNXGQdN9d9GYWkOpGvRG8fqc8sW3tY8iw2bU68s5nuB5we0f5eNY57ylqThZPBH2tEgkhsM4BQhXzsofXnSm3k4N2z5lROvKsmidci0=) 2026-03-02 00:22:09.656200 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKq8/3mdUgL5ujWVTwa6HkfvNoSVyD0Xxrj2gmfTXM2ZxcYRA9BULynrJVKwaUHpXMQxqleYWV0LwyvTGBjnHnM=) 2026-03-02 00:22:09.656212 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP9Ixz4DumYAbPJ9v7lseI5ClN8x/4rT1yT7HuK1MsWm) 2026-03-02 00:22:09.656223 | orchestrator | 2026-03-02 00:22:09.656234 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-02 00:22:09.656245 | orchestrator | Monday 02 March 2026 00:22:09 +0000 (0:00:01.009) 0:00:10.963 ********** 2026-03-02 00:22:09.656271 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9y8NmGn6IKxpyM4yLjIlIwhYrlwnvWwce+Bp+su5FXcFlkFansHMmQqI+NfDfa0RraaTi48X/oEe2/8hskQ8Vlbj4tnI7GNZ1ermDzRVtR6zPzbisrDb5Qn4HgSn5vDshI/YGYbm+MGnAoDjY5e/w++VPk79F8gdbsnepwDWrhWajV5RhKe3G0Gh8vTSyDKYUM9mGtNmasUxfvagN391e65LM0x/9ibEx52W3hU+cK2vW5QkSqqUAqBPvCAWt/q2Qc/jf3oUyLQRQw/bF//ty+37HpO/rbMtBMZZiQAajcOhzzPOHjIO7Ps23DFTUjSDzIohwoR8y6CmLA+ry/JBPpeDSpNs3ynru/toSeBV9kSl21jsgkCFtTlDRmEsEvq8oevt5e1kiGsJAdbFi//Pjf1t4q5jO9/MFletO/4B4zIjH4kLIV2mAKhfmGarIvLNKM5NeyoK8BJUbqZuQc8o5Q9if2cgLjcQF5PIjffMOXiGgub7jvAnptCxSr5eB6/0=) 2026-03-02 00:22:20.761373 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGnoJ5z4PIDSnVW1A/hdHinHZrPy6ZCqmINJ9r6eMo5ml++yeR26qK6qYRHJeZBMYLKciXgfscA3z3x6qz7X0qs=) 2026-03-02 00:22:20.761483 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILTZWHkirwBr/QjX3CZL/79UJIvIIxWkTESHJJgOqeRd) 2026-03-02 00:22:20.761500 | orchestrator | 2026-03-02 00:22:20.761512 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-02 00:22:20.761524 | orchestrator | Monday 02 March 2026 00:22:10 +0000 (0:00:01.045) 0:00:12.008 ********** 2026-03-02 00:22:20.761535 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJnGahJph3JM4yGQrC6S/Ud7k5ECroydlX+WIks9395q/XiNHHCgu+mogeQjaUlE+AyyNR6I354oVEzF+UIU2XE=) 2026-03-02 00:22:20.761545 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBGhQiCWpLqMQzV1FqZtJPzB87OOwV1iDx6Syjho6wtJ) 2026-03-02 00:22:20.761558 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCs3mN6kC5ewn0VPFMsjd9q7QULNx1L4beMy1tAftg1jHXANsuif9+Ahand2mI78VShvk7LHl6eodHvOLsv4vesyEtONjMPZkdB61DMIhw1z5MgKZjrdJHlrwstisRAzXOByG3/3AfF6X2MP8Ggu95eYRHyBAwA8SyEtRZzC0vhmxo/EEz0KB/sCATcK/a1UYtkW9yGI3n0/pgTRupl2bMch5hjC2UjeuR8T6XHOx85ezLiTY8Y3cmDShLJhuDsu87OMm6W0LpX8ecbhDLOu2co8u1YKPIGEmVUiFDQZ0YsM+SmpV6UVj6ff7Vf0uqhfF5v3/gfWdFADT9y+1kopxHTfYJi6j5qSV4l9760AwFZ3LDjaEFre1yLKR226+fS//18BOdJsf0MS6/wKh9+cePsFHNC0yROkRERsHhHK33Jqs5kzo+FqMnjksCzV4NARUOi84OvBsVqnAzpW93oJB0NBOMMbP+7cUGYYnRyh5sP5GQ4isnowgAR7Xmz1FD1A/M=) 2026-03-02 00:22:20.761571 | orchestrator | 2026-03-02 00:22:20.761581 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-02 00:22:20.761592 | orchestrator | Monday 02 March 2026 00:22:11 +0000 (0:00:01.057) 
0:00:13.066 ********** 2026-03-02 00:22:20.761603 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-02 00:22:20.761614 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-02 00:22:20.761624 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-02 00:22:20.761640 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-02 00:22:20.761657 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-02 00:22:20.761693 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-02 00:22:20.761711 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-02 00:22:20.761751 | orchestrator | 2026-03-02 00:22:20.761762 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-02 00:22:20.761773 | orchestrator | Monday 02 March 2026 00:22:16 +0000 (0:00:05.191) 0:00:18.257 ********** 2026-03-02 00:22:20.761784 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-02 00:22:20.761796 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-02 00:22:20.761806 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-02 00:22:20.761816 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-02 00:22:20.761826 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-02 00:22:20.761835 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-02 00:22:20.761845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-02 00:22:20.761854 | orchestrator | 2026-03-02 00:22:20.761864 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-02 00:22:20.761874 | orchestrator | Monday 02 March 2026 00:22:16 +0000 (0:00:00.171) 0:00:18.429 ********** 2026-03-02 00:22:20.761911 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSJNDGt7Zl72G1RwNaCYLm18SZdCLxJvmfo6ChpQ0RN4amBjo7PJAXYCuHsnrytYnTRkOCrvFiyk1jq4ax8Zpq5qgkedAjVkK78YoYB9Qs+8/P+nZYfr/pzw74WDz4tgr3NqMSCajzvCxQHDd6XZOmvlUrJhgBNZtQodH3Js2rCfqsoe2rPAgok4FUmHa1g82jMER80xlTwur4T20dP4XCfb3oTpUYbMz+NeZmUU95HqOLyRmmj/fBSyhGgop3qVgG34Dt5LwU0sFoNhoHLoCzEDM+pq3eQKN/oHKWgJPkkanIPdWwhMTeWyi6Ozqq7dpLl/G86hReXbGZ4bY6Of8kErcbr/BtG0vFhWLSYpHO6RJnAB4iV+WwTiyqCSPtYCfYfh3vfVQyR74Ln47ZO8Qruv25KfGNfAF/bmhv3/tU2ZnfT4epmpdAbw6s6Ht+O/mxsf9+SdhBLPLsXuO/txOZ1GossFn4Q4VBHl//NIHpkJw6ZInB3LjpF3jGlSDiTVU=) 2026-03-02 00:22:20.761925 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKKylkNloYYkgxUs3zDQaopL9vAltMNVn+SzD9KUdhPYn5u8njgcr40UrmAlMV5T7Y0TvJejA/sgxNRwptH/qT0=) 2026-03-02 00:22:20.761938 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE0QysPhLsWTTa8p0NqYky3NVwgB4V3Kg5HQhoVSBDUG) 2026-03-02 
00:22:20.761950 | orchestrator | 2026-03-02 00:22:20.761961 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-02 00:22:20.761972 | orchestrator | Monday 02 March 2026 00:22:17 +0000 (0:00:01.035) 0:00:19.464 ********** 2026-03-02 00:22:20.761984 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCPLcs2YvJGBdui/HWHjf8lAzMREuFIcqQfeWciyTiCvNTe0AqbxQ6LpNUDfB4Ee3lbwwCcZ/pAyE/VQhwOF/8QZ/Z7zQdh7ppREdYjwCIdnj3J6BvRVi+huyW7Cvcej7UqywPeKxO1LHgYwl493y8SnKibWOx7KWa1s2QBIZn/09Yilwfvune4LRS78VdeFtwq957ZBmdCaP4+7dcENRs0jlOQFYpzGs9Ak6haMjqAd4KcpzlQfTS7EiPfEy1bfsGpRRmBfP6HFxm7Ff/fFUo8jWknPdRFPXWnYX21Is2Q5u8Nuz9m8/jOLKd8W2x+vE6MZfOBm1hv35iyl36PBxeK2SutGK36g9HndlQO3LfJHaaWfoi8dl0agdBvZe4XHvRveBVux9yX9fRYtgqOb7HnqHYHL54WUZZjiesIzgs0FuNFn6++vZOo/RG8zqDWVwBI/e5LA9FGWiw1Was3kFjXXWYs6CJ3NaGx4rVCmeFk8CvLElIvAs4vqVUNGV80AOc=) 2026-03-02 00:22:20.762004 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIrtqxAAFYEpIrcDSduhUqOR40qJelU14XXSCkluPMIKi5G87A6bLQ0GgeMtqWJyjIWljq+QiA+ACmK5KSNpKkM=) 2026-03-02 00:22:20.762064 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHq7ZZ7t1jveSxfJAbuHUIiObhIytDU2OYwp353PyzlH) 2026-03-02 00:22:20.762076 | orchestrator | 2026-03-02 00:22:20.762131 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-02 00:22:20.762143 | orchestrator | Monday 02 March 2026 00:22:18 +0000 (0:00:01.015) 0:00:20.479 ********** 2026-03-02 00:22:20.762153 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFME3BMhrCQKbDW7gSx/aFTG4/PjwcTNRQ9ogPDv+4/e) 2026-03-02 00:22:20.762163 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDRYliI+nMyofFclr32tjnULQg4p7spwftgWwkAPchU3HHtTxNxrdEdKEUef5+5sw2LF+32aHnmlTL55Ru+lOXjCqz2zEHw3wud9fjHRkOtR73Pp5OywFebLI6UVMMim8qltb58Iglw8x4+4WwKAkIhpGQ6b1JbBsLUZwReViQYBND7XQNc2AvsjEnkdxlHsB3JOk6HiRTj+0pi58NR7WzxThIPhT/zEGn0zQFU07g0DrT+xpKGlbCtBVHKCfMYAYfga0Zj0Ix3Z/+3W8Kv4KrgfuY8J3UZiI5kqP5iSGAfFBIt/fH4baW23eRqTqurqRz9ex9yEZ2M8b1/0lVkqjd5FAacTc6xqI/GIyjBuyfcE/2G/xoZv6XEt7oMnCChk3LuntH6caQ1G9oyiE2UXGjLd9bc4yOH3sj2+qpROwolqVeOjie5PaMuWXqx/WBoCPBIqlorhaw2wHO1Rnq+MKSxL2nRabNEVVvOE6Y6/jZLdhdjb9brc5DbZoifa+Q/YcU=) 2026-03-02 00:22:20.762174 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGtTsBDYt6o1/tENzqJIm4ICFHfxaYkiy3BQQu0jqkwzbEBASqzEeK/EtJyLX8zkzF02/0WpXJb+JKzK4CbTvzg=) 2026-03-02 00:22:20.762184 | orchestrator | 2026-03-02 00:22:20.762194 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-02 00:22:20.762203 | orchestrator | Monday 02 March 2026 00:22:19 +0000 (0:00:00.983) 0:00:21.462 ********** 2026-03-02 00:22:20.762213 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJhtn+Val53WyUw7R5VbF+xR2qhVqiqekjwl9dTUwFV6) 2026-03-02 00:22:20.762230 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0QreecMugxLR3491tztIKFIUPg2Vb6WcKoyYPwmY5zx59tC4vnO1xKWYit9ljOeZyQtmHTxRFkUV9Q8jkBrsBui/HftEhs57o3VyIz2u7Kj1l8xu9mPQCcOpNH2sUzfBdA9RXPJLfINfX1I6kxb9o5CyJiLaa5rIeCTmqntKli8TJiDbHiuDwte3vCOmHKbdye/zB9Bl0dbtR0fu5VhGCKYoV+yimT98LZinLBGpk529FeVZCnzzV2sTKYxeWHOIkmA47147x00z3uvtoX2sbDQQckeLCOnj66vWxRmeq4iuOCAuMrW7TcWxIW0I/9XwRqMO8v3NLAZyjH+DUAwKRsGxP8KqB4F7alrCNGYobXrwC8AxxWmSwbZZQcF2TXiQbbiqtO4ygsgRd/3owO+yXhBO+8iXVD4ZlGOd6M/fdTJV7779PIYI5HGXTI3wUgx2lMOW9Alm0jzkGH7rrMCD95hL76IkiYPKXHvrMfbjlSGT581HO3Ro/2PiB/symbcU=) 2026-03-02 00:22:20.762251 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFLK6O+qHt5yJyH3PMHBprmCqACXU0S2d+LHEWWkv0zP4rAL7WiYVK0QuZpL3dp8AJwzuampSHq799G35Yvh7mI=) 2026-03-02 00:22:24.974137 | orchestrator | 2026-03-02 00:22:24.974270 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-02 00:22:24.974296 | orchestrator | Monday 02 March 2026 00:22:20 +0000 (0:00:01.002) 0:00:22.465 ********** 2026-03-02 00:22:24.974316 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKq8/3mdUgL5ujWVTwa6HkfvNoSVyD0Xxrj2gmfTXM2ZxcYRA9BULynrJVKwaUHpXMQxqleYWV0LwyvTGBjnHnM=) 2026-03-02 00:22:24.974342 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDFf6Tbf5Rr2fivqqvxryExEvaYcBmDflOTPBpKJ7Ms8+anxQv82/NdqqpNEz/ygvFNwry0+qcKI7NyzojFLBhe+lSRunkntq1g78sl8uIzZh2MZsY/NqigTwQQUXU+ORBZ5cGg35rfsbyyC08m/NdEsmSTP6DRcWN3WyGyokP78oXfGPLpOD750kRhFC065uh6l1mwKqBbUkyKITGK3L/YxHLD+nnkrElpYWGXAF7C/oyhuG+AOHsPI5kJHzfzegma0DF2qe7rH8p4wTQXtQb2BPsux7MARCxZQ6myvoQP49ojwWBItnqHbQoj031BcT1STehPBd1TO/vc5BeKOwawC8n3oWmxckLIaAN3IYh9nCMg8vOPDlvPeY0ahqKDQeFCqjfDD0yQybgOcOP9G8ydSLvnNXGQdN9d9GYWkOpGvRG8fqc8sW3tY8iw2bU68s5nuB5we0f5eNY57ylqThZPBH2tEgkhsM4BQhXzsofXnSm3k4N2z5lROvKsmidci0=) 2026-03-02 00:22:24.974403 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP9Ixz4DumYAbPJ9v7lseI5ClN8x/4rT1yT7HuK1MsWm) 2026-03-02 00:22:24.974425 | orchestrator | 2026-03-02 00:22:24.974441 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-02 00:22:24.974467 | orchestrator | Monday 02 March 2026 00:22:21 +0000 (0:00:00.998) 0:00:23.464 ********** 2026-03-02 00:22:24.974480 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC9y8NmGn6IKxpyM4yLjIlIwhYrlwnvWwce+Bp+su5FXcFlkFansHMmQqI+NfDfa0RraaTi48X/oEe2/8hskQ8Vlbj4tnI7GNZ1ermDzRVtR6zPzbisrDb5Qn4HgSn5vDshI/YGYbm+MGnAoDjY5e/w++VPk79F8gdbsnepwDWrhWajV5RhKe3G0Gh8vTSyDKYUM9mGtNmasUxfvagN391e65LM0x/9ibEx52W3hU+cK2vW5QkSqqUAqBPvCAWt/q2Qc/jf3oUyLQRQw/bF//ty+37HpO/rbMtBMZZiQAajcOhzzPOHjIO7Ps23DFTUjSDzIohwoR8y6CmLA+ry/JBPpeDSpNs3ynru/toSeBV9kSl21jsgkCFtTlDRmEsEvq8oevt5e1kiGsJAdbFi//Pjf1t4q5jO9/MFletO/4B4zIjH4kLIV2mAKhfmGarIvLNKM5NeyoK8BJUbqZuQc8o5Q9if2cgLjcQF5PIjffMOXiGgub7jvAnptCxSr5eB6/0=) 2026-03-02 00:22:24.974493 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGnoJ5z4PIDSnVW1A/hdHinHZrPy6ZCqmINJ9r6eMo5ml++yeR26qK6qYRHJeZBMYLKciXgfscA3z3x6qz7X0qs=) 2026-03-02 00:22:24.974504 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILTZWHkirwBr/QjX3CZL/79UJIvIIxWkTESHJJgOqeRd) 2026-03-02 00:22:24.974516 | orchestrator | 2026-03-02 00:22:24.974527 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-02 00:22:24.974538 | orchestrator | Monday 02 March 2026 00:22:22 +0000 (0:00:00.978) 0:00:24.442 ********** 2026-03-02 00:22:24.974549 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJnGahJph3JM4yGQrC6S/Ud7k5ECroydlX+WIks9395q/XiNHHCgu+mogeQjaUlE+AyyNR6I354oVEzF+UIU2XE=) 2026-03-02 00:22:24.974561 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCs3mN6kC5ewn0VPFMsjd9q7QULNx1L4beMy1tAftg1jHXANsuif9+Ahand2mI78VShvk7LHl6eodHvOLsv4vesyEtONjMPZkdB61DMIhw1z5MgKZjrdJHlrwstisRAzXOByG3/3AfF6X2MP8Ggu95eYRHyBAwA8SyEtRZzC0vhmxo/EEz0KB/sCATcK/a1UYtkW9yGI3n0/pgTRupl2bMch5hjC2UjeuR8T6XHOx85ezLiTY8Y3cmDShLJhuDsu87OMm6W0LpX8ecbhDLOu2co8u1YKPIGEmVUiFDQZ0YsM+SmpV6UVj6ff7Vf0uqhfF5v3/gfWdFADT9y+1kopxHTfYJi6j5qSV4l9760AwFZ3LDjaEFre1yLKR226+fS//18BOdJsf0MS6/wKh9+cePsFHNC0yROkRERsHhHK33Jqs5kzo+FqMnjksCzV4NARUOi84OvBsVqnAzpW93oJB0NBOMMbP+7cUGYYnRyh5sP5GQ4isnowgAR7Xmz1FD1A/M=) 2026-03-02 00:22:24.974575 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBGhQiCWpLqMQzV1FqZtJPzB87OOwV1iDx6Syjho6wtJ) 2026-03-02 00:22:24.974588 | orchestrator | 2026-03-02 00:22:24.974601 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-02 00:22:24.974613 | orchestrator | Monday 02 March 2026 00:22:23 +0000 (0:00:01.040) 0:00:25.483 ********** 2026-03-02 00:22:24.974627 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-02 00:22:24.974640 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-02 00:22:24.974654 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-02 00:22:24.974666 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-02 00:22:24.974678 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-02 00:22:24.974691 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-02 00:22:24.974704 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-02 00:22:24.974717 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:22:24.974731 | orchestrator | 2026-03-02 00:22:24.974763 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-02 00:22:24.974777 | orchestrator | Monday 02 March 
2026 00:22:23 +0000 (0:00:00.152) 0:00:25.635 ********** 2026-03-02 00:22:24.974797 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:22:24.974810 | orchestrator | 2026-03-02 00:22:24.974824 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-02 00:22:24.974837 | orchestrator | Monday 02 March 2026 00:22:24 +0000 (0:00:00.046) 0:00:25.682 ********** 2026-03-02 00:22:24.974850 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:22:24.974862 | orchestrator | 2026-03-02 00:22:24.974875 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-02 00:22:24.974889 | orchestrator | Monday 02 March 2026 00:22:24 +0000 (0:00:00.046) 0:00:25.728 ********** 2026-03-02 00:22:24.974902 | orchestrator | changed: [testbed-manager] 2026-03-02 00:22:24.974914 | orchestrator | 2026-03-02 00:22:24.974927 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:22:24.974941 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-02 00:22:24.974955 | orchestrator | 2026-03-02 00:22:24.974967 | orchestrator | 2026-03-02 00:22:24.974978 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 00:22:24.974989 | orchestrator | Monday 02 March 2026 00:22:24 +0000 (0:00:00.708) 0:00:26.436 ********** 2026-03-02 00:22:24.975000 | orchestrator | =============================================================================== 2026-03-02 00:22:24.975011 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.65s 2026-03-02 00:22:24.975022 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.19s 2026-03-02 00:22:24.975033 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-03-02 
00:22:24.975045 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-02 00:22:24.975055 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-02 00:22:24.975066 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-02 00:22:24.975077 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-02 00:22:24.975124 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-02 00:22:24.975136 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-03-02 00:22:24.975147 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-02 00:22:24.975159 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-02 00:22:24.975170 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-03-02 00:22:24.975181 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-03-02 00:22:24.975199 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-03-02 00:22:24.975211 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-03-02 00:22:24.975222 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2026-03-02 00:22:24.975233 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.71s 2026-03-02 00:22:24.975244 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-03-02 00:22:24.975255 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with 
hostname --- 0.17s 2026-03-02 00:22:24.975266 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s 2026-03-02 00:22:25.251036 | orchestrator | + osism apply squid 2026-03-02 00:22:37.248314 | orchestrator | 2026-03-02 00:22:37 | INFO  | Prepare task for execution of squid. 2026-03-02 00:22:37.320644 | orchestrator | 2026-03-02 00:22:37 | INFO  | Task f7f0b8d3-76dc-4e21-808f-7771a56aaa1b (squid) was prepared for execution. 2026-03-02 00:22:37.320745 | orchestrator | 2026-03-02 00:22:37 | INFO  | It takes a moment until task f7f0b8d3-76dc-4e21-808f-7771a56aaa1b (squid) has been started and output is visible here. 2026-03-02 00:24:39.911239 | orchestrator | 2026-03-02 00:24:39.911380 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-02 00:24:39.911411 | orchestrator | 2026-03-02 00:24:39.911434 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-02 00:24:39.911455 | orchestrator | Monday 02 March 2026 00:22:41 +0000 (0:00:00.155) 0:00:00.155 ********** 2026-03-02 00:24:39.911474 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-02 00:24:39.911496 | orchestrator | 2026-03-02 00:24:39.911515 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-02 00:24:39.911533 | orchestrator | Monday 02 March 2026 00:22:41 +0000 (0:00:00.079) 0:00:00.235 ********** 2026-03-02 00:24:39.911551 | orchestrator | ok: [testbed-manager] 2026-03-02 00:24:39.911569 | orchestrator | 2026-03-02 00:24:39.911589 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-02 00:24:39.911609 | orchestrator | Monday 02 March 2026 00:22:42 +0000 (0:00:01.354) 0:00:01.589 ********** 2026-03-02 00:24:39.911628 | orchestrator | changed: 
[testbed-manager] => (item=/opt/squid/configuration) 2026-03-02 00:24:39.911647 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-02 00:24:39.911667 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-02 00:24:39.911684 | orchestrator | 2026-03-02 00:24:39.911696 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-02 00:24:39.911707 | orchestrator | Monday 02 March 2026 00:22:43 +0000 (0:00:01.075) 0:00:02.665 ********** 2026-03-02 00:24:39.911718 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-02 00:24:39.911729 | orchestrator | 2026-03-02 00:24:39.911740 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-02 00:24:39.911751 | orchestrator | Monday 02 March 2026 00:22:44 +0000 (0:00:01.002) 0:00:03.668 ********** 2026-03-02 00:24:39.911764 | orchestrator | ok: [testbed-manager] 2026-03-02 00:24:39.911776 | orchestrator | 2026-03-02 00:24:39.911789 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-02 00:24:39.911802 | orchestrator | Monday 02 March 2026 00:22:45 +0000 (0:00:00.347) 0:00:04.015 ********** 2026-03-02 00:24:39.911816 | orchestrator | changed: [testbed-manager] 2026-03-02 00:24:39.911828 | orchestrator | 2026-03-02 00:24:39.911841 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-02 00:24:39.911854 | orchestrator | Monday 02 March 2026 00:22:46 +0000 (0:00:00.870) 0:00:04.885 ********** 2026-03-02 00:24:39.911868 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-02 00:24:39.911881 | orchestrator | ok: [testbed-manager] 2026-03-02 00:24:39.911894 | orchestrator | 2026-03-02 00:24:39.911907 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-02 00:24:39.911920 | orchestrator | Monday 02 March 2026 00:23:25 +0000 (0:00:39.485) 0:00:44.371 ********** 2026-03-02 00:24:39.911932 | orchestrator | changed: [testbed-manager] 2026-03-02 00:24:39.911946 | orchestrator | 2026-03-02 00:24:39.911958 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-02 00:24:39.911992 | orchestrator | Monday 02 March 2026 00:23:38 +0000 (0:00:13.297) 0:00:57.668 ********** 2026-03-02 00:24:39.912006 | orchestrator | Pausing for 60 seconds 2026-03-02 00:24:39.912019 | orchestrator | changed: [testbed-manager] 2026-03-02 00:24:39.912103 | orchestrator | 2026-03-02 00:24:39.912118 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-02 00:24:39.912131 | orchestrator | Monday 02 March 2026 00:24:39 +0000 (0:01:00.106) 0:01:57.775 ********** 2026-03-02 00:24:39.912145 | orchestrator | ok: [testbed-manager] 2026-03-02 00:24:39.912158 | orchestrator | 2026-03-02 00:24:39.912169 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-02 00:24:39.912209 | orchestrator | Monday 02 March 2026 00:24:39 +0000 (0:00:00.075) 0:01:57.851 ********** 2026-03-02 00:24:39.912221 | orchestrator | changed: [testbed-manager] 2026-03-02 00:24:39.912231 | orchestrator | 2026-03-02 00:24:39.912242 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:24:39.912254 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:24:39.912265 | orchestrator | 2026-03-02 00:24:39.912276 | orchestrator | 2026-03-02 00:24:39.912287 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-02 00:24:39.912298 | orchestrator | Monday 02 March 2026 00:24:39 +0000 (0:00:00.607) 0:01:58.459 ********** 2026-03-02 00:24:39.912309 | orchestrator | =============================================================================== 2026-03-02 00:24:39.912320 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.11s 2026-03-02 00:24:39.912331 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 39.49s 2026-03-02 00:24:39.912342 | orchestrator | osism.services.squid : Restart squid service --------------------------- 13.30s 2026-03-02 00:24:39.912353 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.35s 2026-03-02 00:24:39.912364 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.08s 2026-03-02 00:24:39.912375 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.00s 2026-03-02 00:24:39.912386 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.87s 2026-03-02 00:24:39.912397 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.61s 2026-03-02 00:24:39.912407 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2026-03-02 00:24:39.912418 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-03-02 00:24:39.912429 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2026-03-02 00:24:40.194376 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-02 00:24:40.194475 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-03-02 00:24:40.200212 | orchestrator | + set -e 2026-03-02 00:24:40.200242 | orchestrator | + NAMESPACE=kolla 2026-03-02 
00:24:40.200255 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-02 00:24:40.204879 | orchestrator | ++ semver latest 9.0.0 2026-03-02 00:24:40.249807 | orchestrator | + [[ -1 -lt 0 ]] 2026-03-02 00:24:40.249883 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-02 00:24:40.250474 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-02 00:24:52.135898 | orchestrator | 2026-03-02 00:24:52 | INFO  | Prepare task for execution of operator. 2026-03-02 00:24:52.224585 | orchestrator | 2026-03-02 00:24:52 | INFO  | Task ca6740f1-62d6-48a4-9fe5-35ff024f8cf0 (operator) was prepared for execution. 2026-03-02 00:24:52.224655 | orchestrator | 2026-03-02 00:24:52 | INFO  | It takes a moment until task ca6740f1-62d6-48a4-9fe5-35ff024f8cf0 (operator) has been started and output is visible here. 2026-03-02 00:25:08.452115 | orchestrator | 2026-03-02 00:25:08.452232 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-02 00:25:08.452249 | orchestrator | 2026-03-02 00:25:08.452263 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-02 00:25:08.452274 | orchestrator | Monday 02 March 2026 00:24:56 +0000 (0:00:00.128) 0:00:00.128 ********** 2026-03-02 00:25:08.452286 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:25:08.452299 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:25:08.452311 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:25:08.452321 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:25:08.452332 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:25:08.452343 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:25:08.452359 | orchestrator | 2026-03-02 00:25:08.452370 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-02 00:25:08.452407 | orchestrator | Monday 02 March 2026 00:25:00 
+0000 (0:00:04.183) 0:00:04.311 **********
2026-03-02 00:25:08.452419 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:25:08.452430 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:25:08.452441 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:25:08.452452 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:25:08.452463 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:25:08.452474 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:25:08.452484 | orchestrator |
2026-03-02 00:25:08.452495 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-03-02 00:25:08.452506 | orchestrator |
2026-03-02 00:25:08.452517 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-03-02 00:25:08.452529 | orchestrator | Monday 02 March 2026 00:25:01 +0000 (0:00:00.823) 0:00:05.134 **********
2026-03-02 00:25:08.452540 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:25:08.452551 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:25:08.452561 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:25:08.452572 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:25:08.452583 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:25:08.452594 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:25:08.452605 | orchestrator |
2026-03-02 00:25:08.452618 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-03-02 00:25:08.452631 | orchestrator | Monday 02 March 2026 00:25:01 +0000 (0:00:00.162) 0:00:05.297 **********
2026-03-02 00:25:08.452643 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:25:08.452657 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:25:08.452669 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:25:08.452682 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:25:08.452711 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:25:08.452724 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:25:08.452737 | orchestrator |
2026-03-02 00:25:08.452750 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-03-02 00:25:08.452763 | orchestrator | Monday 02 March 2026 00:25:01 +0000 (0:00:00.176) 0:00:05.473 **********
2026-03-02 00:25:08.452777 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:25:08.452791 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:25:08.452805 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:25:08.452818 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:25:08.452831 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:25:08.452844 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:25:08.452856 | orchestrator |
2026-03-02 00:25:08.452869 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-03-02 00:25:08.452883 | orchestrator | Monday 02 March 2026 00:25:02 +0000 (0:00:00.612) 0:00:06.086 **********
2026-03-02 00:25:08.452896 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:25:08.452909 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:25:08.452922 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:25:08.452934 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:25:08.452948 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:25:08.452961 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:25:08.452994 | orchestrator |
2026-03-02 00:25:08.453078 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-03-02 00:25:08.453105 | orchestrator | Monday 02 March 2026 00:25:02 +0000 (0:00:00.761) 0:00:06.847 **********
2026-03-02 00:25:08.453122 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-03-02 00:25:08.453139 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-03-02 00:25:08.453156 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-03-02 00:25:08.453173 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-03-02 00:25:08.453190 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-03-02 00:25:08.453206 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-03-02 00:25:08.453223 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-03-02 00:25:08.453240 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-03-02 00:25:08.453257 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-03-02 00:25:08.453287 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-03-02 00:25:08.453303 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-03-02 00:25:08.453319 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-03-02 00:25:08.453335 | orchestrator |
2026-03-02 00:25:08.453351 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-03-02 00:25:08.453368 | orchestrator | Monday 02 March 2026 00:25:03 +0000 (0:00:01.182) 0:00:08.029 **********
2026-03-02 00:25:08.453387 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:25:08.453404 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:25:08.453422 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:25:08.453440 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:25:08.453460 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:25:08.453479 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:25:08.453497 | orchestrator |
2026-03-02 00:25:08.453515 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-03-02 00:25:08.453527 | orchestrator | Monday 02 March 2026 00:25:05 +0000 (0:00:01.157) 0:00:09.187 **********
2026-03-02 00:25:08.453539 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-03-02 00:25:08.453550 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-03-02 00:25:08.453562 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-03-02 00:25:08.453573 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-03-02 00:25:08.453584 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-03-02 00:25:08.453617 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-03-02 00:25:08.453628 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-03-02 00:25:08.453640 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-03-02 00:25:08.453650 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-03-02 00:25:08.453661 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-03-02 00:25:08.453672 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-03-02 00:25:08.453683 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-03-02 00:25:08.453694 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-03-02 00:25:08.453705 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-03-02 00:25:08.453716 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-03-02 00:25:08.453727 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-03-02 00:25:08.453738 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-03-02 00:25:08.453749 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-03-02 00:25:08.453760 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-03-02 00:25:08.453770 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-03-02 00:25:08.453781 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-03-02 00:25:08.453792 | orchestrator |
2026-03-02 00:25:08.453803 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-02 00:25:08.453815 | orchestrator | Monday 02 March 2026 00:25:06 +0000 (0:00:01.265) 0:00:10.452 **********
2026-03-02 00:25:08.453826 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:25:08.453837 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:25:08.453848 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:25:08.453868 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:25:08.453879 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:25:08.453890 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:25:08.453901 | orchestrator |
2026-03-02 00:25:08.453912 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-02 00:25:08.453932 | orchestrator | Monday 02 March 2026 00:25:06 +0000 (0:00:00.148) 0:00:10.575 **********
2026-03-02 00:25:08.453943 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:25:08.453954 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:25:08.453965 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:25:08.453976 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:25:08.453987 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:25:08.453998 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:25:08.454009 | orchestrator |
2026-03-02 00:25:08.454096 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-02 00:25:08.454110 | orchestrator | Monday 02 March 2026 00:25:06 +0000 (0:00:00.148) 0:00:10.724 **********
2026-03-02 00:25:08.454122 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:25:08.454133 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:25:08.454144 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:25:08.454154 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:25:08.454165 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:25:08.454176 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:25:08.454187 | orchestrator |
2026-03-02 00:25:08.454198 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-02 00:25:08.454209 | orchestrator | Monday 02 March 2026 00:25:07 +0000 (0:00:00.577) 0:00:11.302 **********
2026-03-02 00:25:08.454220 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:25:08.454230 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:25:08.454241 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:25:08.454252 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:25:08.454263 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:25:08.454273 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:25:08.454284 | orchestrator |
2026-03-02 00:25:08.454295 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-02 00:25:08.454306 | orchestrator | Monday 02 March 2026 00:25:07 +0000 (0:00:00.133) 0:00:11.436 **********
2026-03-02 00:25:08.454317 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-02 00:25:08.454328 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-02 00:25:08.454377 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:25:08.454400 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:25:08.454412 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-02 00:25:08.454423 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:25:08.454433 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-02 00:25:08.454444 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:25:08.454455 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-02 00:25:08.454466 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-02 00:25:08.454476 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:25:08.454487 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:25:08.454498 | orchestrator |
2026-03-02 00:25:08.454509 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-02 00:25:08.454520 | orchestrator | Monday 02 March 2026 00:25:08 +0000 (0:00:00.841) 0:00:12.277 **********
2026-03-02 00:25:08.454531 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:25:08.454542 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:25:08.454553 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:25:08.454563 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:25:08.454574 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:25:08.454585 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:25:08.454596 | orchestrator |
2026-03-02 00:25:08.454607 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-02 00:25:08.454618 | orchestrator | Monday 02 March 2026 00:25:08 +0000 (0:00:00.133) 0:00:12.411 **********
2026-03-02 00:25:08.454629 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:25:08.454639 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:25:08.454650 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:25:08.454661 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:25:08.454688 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:25:09.647396 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:25:09.647517 | orchestrator |
2026-03-02 00:25:09.647541 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-02 00:25:09.647559 | orchestrator | Monday 02 March 2026 00:25:08 +0000 (0:00:00.108) 0:00:12.519 **********
2026-03-02 00:25:09.647576 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:25:09.647591 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:25:09.647607 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:25:09.647624 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:25:09.647641 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:25:09.647653 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:25:09.647662 | orchestrator |
2026-03-02 00:25:09.647673 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-02 00:25:09.647683 | orchestrator | Monday 02 March 2026 00:25:08 +0000 (0:00:00.140) 0:00:12.660 **********
2026-03-02 00:25:09.647693 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:25:09.647703 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:25:09.647714 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:25:09.647723 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:25:09.647733 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:25:09.647743 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:25:09.647758 | orchestrator |
2026-03-02 00:25:09.647774 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-02 00:25:09.647790 | orchestrator | Monday 02 March 2026 00:25:09 +0000 (0:00:00.714) 0:00:13.375 **********
2026-03-02 00:25:09.647806 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:25:09.647822 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:25:09.647834 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:25:09.647844 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:25:09.647854 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:25:09.647864 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:25:09.647874 | orchestrator |
2026-03-02 00:25:09.647884 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:25:09.647895 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-02 00:25:09.647928 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-02 00:25:09.647941 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-02 00:25:09.647953 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-02 00:25:09.647966 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-02 00:25:09.647978 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-02 00:25:09.647990 | orchestrator |
2026-03-02 00:25:09.648002 | orchestrator |
2026-03-02 00:25:09.648014 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 00:25:09.648060 | orchestrator | Monday 02 March 2026 00:25:09 +0000 (0:00:00.174) 0:00:13.550 **********
2026-03-02 00:25:09.648072 | orchestrator | ===============================================================================
2026-03-02 00:25:09.648083 | orchestrator | Gathering Facts --------------------------------------------------------- 4.18s
2026-03-02 00:25:09.648095 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.27s
2026-03-02 00:25:09.648107 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.18s
2026-03-02 00:25:09.648143 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.16s
2026-03-02 00:25:09.648155 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.84s
2026-03-02 00:25:09.648167 | orchestrator | Do not require tty for all users ---------------------------------------- 0.82s
2026-03-02 00:25:09.648179 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.76s
2026-03-02 00:25:09.648191 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.71s
2026-03-02 00:25:09.648203 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s
2026-03-02 00:25:09.648214 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.58s
2026-03-02 00:25:09.648225 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s
2026-03-02 00:25:09.648237 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.17s
2026-03-02 00:25:09.648249 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2026-03-02 00:25:09.648261 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.15s
2026-03-02 00:25:09.648273 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2026-03-02 00:25:09.648285 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.13s
2026-03-02 00:25:09.648296 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.13s
2026-03-02 00:25:09.648305 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.12s
2026-03-02 00:25:09.648315 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.11s
2026-03-02 00:25:09.835701 | orchestrator | + osism apply --environment custom facts
2026-03-02 00:25:11.521483 | orchestrator | 2026-03-02 00:25:11 | INFO  | Trying to run play facts in environment custom
2026-03-02 00:25:21.544496 | orchestrator | 2026-03-02 00:25:21 | INFO  | Prepare task for execution of facts.
2026-03-02 00:25:21.609447 | orchestrator | 2026-03-02 00:25:21 | INFO  | Task c0c67f6e-5b1e-46af-afcc-e7f7274a9904 (facts) was prepared for execution.
2026-03-02 00:25:21.609548 | orchestrator | 2026-03-02 00:25:21 | INFO  | It takes a moment until task c0c67f6e-5b1e-46af-afcc-e7f7274a9904 (facts) has been started and output is visible here.
2026-03-02 00:26:05.409725 | orchestrator |
2026-03-02 00:26:05.409859 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-02 00:26:05.409877 | orchestrator |
2026-03-02 00:26:05.409890 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-02 00:26:05.409902 | orchestrator | Monday 02 March 2026 00:25:25 +0000 (0:00:00.051) 0:00:00.051 **********
2026-03-02 00:26:05.409914 | orchestrator | ok: [testbed-manager]
2026-03-02 00:26:05.409926 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:26:05.409939 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:26:05.409950 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:26:05.409961 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:26:05.409972 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:26:05.409983 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:26:05.410099 | orchestrator |
2026-03-02 00:26:05.410114 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-02 00:26:05.410126 | orchestrator | Monday 02 March 2026 00:25:26 +0000 (0:00:01.411) 0:00:01.463 **********
2026-03-02 00:26:05.410137 | orchestrator | ok: [testbed-manager]
2026-03-02 00:26:05.410149 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:26:05.410160 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:26:05.410171 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:26:05.410184 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:26:05.410195 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:26:05.410222 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:26:05.410235 | orchestrator |
2026-03-02 00:26:05.410270 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-02 00:26:05.410284 | orchestrator |
2026-03-02 00:26:05.410296 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-02 00:26:05.410322 | orchestrator | Monday 02 March 2026 00:25:27 +0000 (0:00:01.143) 0:00:02.607 **********
2026-03-02 00:26:05.410335 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:26:05.410349 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:26:05.410361 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:26:05.410375 | orchestrator |
2026-03-02 00:26:05.410388 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-02 00:26:05.410402 | orchestrator | Monday 02 March 2026 00:25:27 +0000 (0:00:00.073) 0:00:02.680 **********
2026-03-02 00:26:05.410513 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:26:05.410527 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:26:05.410540 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:26:05.410550 | orchestrator |
2026-03-02 00:26:05.410561 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-02 00:26:05.410572 | orchestrator | Monday 02 March 2026 00:25:27 +0000 (0:00:00.160) 0:00:02.841 **********
2026-03-02 00:26:05.410583 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:26:05.410594 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:26:05.410604 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:26:05.410615 | orchestrator |
2026-03-02 00:26:05.410626 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-02 00:26:05.410637 | orchestrator | Monday 02 March 2026 00:25:28 +0000 (0:00:00.177) 0:00:03.019 **********
2026-03-02 00:26:05.410649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-02 00:26:05.410661 | orchestrator |
2026-03-02 00:26:05.410672 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-02 00:26:05.410683 | orchestrator | Monday 02 March 2026 00:25:28 +0000 (0:00:00.100) 0:00:03.119 **********
2026-03-02 00:26:05.410694 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:26:05.410705 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:26:05.410715 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:26:05.410726 | orchestrator |
2026-03-02 00:26:05.410737 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-02 00:26:05.410748 | orchestrator | Monday 02 March 2026 00:25:28 +0000 (0:00:00.416) 0:00:03.536 **********
2026-03-02 00:26:05.410758 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:26:05.410769 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:26:05.410780 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:26:05.410790 | orchestrator |
2026-03-02 00:26:05.410801 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-02 00:26:05.410812 | orchestrator | Monday 02 March 2026 00:25:28 +0000 (0:00:00.095) 0:00:03.631 **********
2026-03-02 00:26:05.410823 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:26:05.410834 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:26:05.410844 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:26:05.410855 | orchestrator |
2026-03-02 00:26:05.410866 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-02 00:26:05.410877 | orchestrator | Monday 02 March 2026 00:25:29 +0000 (0:00:01.084) 0:00:04.715 **********
2026-03-02 00:26:05.410888 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:26:05.410898 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:26:05.410909 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:26:05.410920 | orchestrator |
2026-03-02 00:26:05.410931 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-02 00:26:05.410941 | orchestrator | Monday 02 March 2026 00:25:30 +0000 (0:00:00.440) 0:00:05.156 **********
2026-03-02 00:26:05.410952 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:26:05.410963 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:26:05.410974 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:26:05.410985 | orchestrator |
2026-03-02 00:26:05.411027 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-02 00:26:05.411040 | orchestrator | Monday 02 March 2026 00:25:31 +0000 (0:00:01.054) 0:00:06.212 **********
2026-03-02 00:26:05.411051 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:26:05.411061 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:26:05.411072 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:26:05.411083 | orchestrator |
2026-03-02 00:26:05.411094 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-02 00:26:05.411105 | orchestrator | Monday 02 March 2026 00:25:47 +0000 (0:00:16.516) 0:00:22.729 **********
2026-03-02 00:26:05.411115 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:26:05.411126 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:26:05.411137 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:26:05.411148 | orchestrator |
2026-03-02 00:26:05.411159 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-02 00:26:05.411190 | orchestrator | Monday 02 March 2026 00:25:47 +0000 (0:00:00.100) 0:00:22.829 **********
2026-03-02 00:26:05.411201 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:26:05.411212 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:26:05.411223 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:26:05.411234 | orchestrator |
2026-03-02 00:26:05.411244 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-02 00:26:05.411255 | orchestrator | Monday 02 March 2026 00:25:56 +0000 (0:00:08.343) 0:00:31.172 **********
2026-03-02 00:26:05.411266 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:26:05.411277 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:26:05.411287 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:26:05.411298 | orchestrator |
2026-03-02 00:26:05.411309 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-02 00:26:05.411320 | orchestrator | Monday 02 March 2026 00:25:56 +0000 (0:00:00.444) 0:00:31.617 **********
2026-03-02 00:26:05.411331 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-02 00:26:05.411342 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-02 00:26:05.411353 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-02 00:26:05.411363 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-02 00:26:05.411374 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-02 00:26:05.411385 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-02 00:26:05.411396 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-02 00:26:05.411406 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-02 00:26:05.411417 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-02 00:26:05.411428 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-02 00:26:05.411439 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-02 00:26:05.411449 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-02 00:26:05.411460 | orchestrator |
2026-03-02 00:26:05.411471 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-02 00:26:05.411481 | orchestrator | Monday 02 March 2026 00:26:00 +0000 (0:00:03.639) 0:00:35.256 **********
2026-03-02 00:26:05.411492 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:26:05.411503 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:26:05.411514 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:26:05.411524 | orchestrator |
2026-03-02 00:26:05.411535 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-02 00:26:05.411546 | orchestrator |
2026-03-02 00:26:05.411557 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-02 00:26:05.411568 | orchestrator | Monday 02 March 2026 00:26:01 +0000 (0:00:01.297) 0:00:36.554 **********
2026-03-02 00:26:05.411579 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:26:05.411596 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:26:05.411606 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:26:05.411617 | orchestrator | ok: [testbed-manager]
2026-03-02 00:26:05.411628 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:26:05.411691 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:26:05.411703 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:26:05.411714 | orchestrator |
2026-03-02 00:26:05.411725 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:26:05.411737 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:26:05.411749 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:26:05.411762 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:26:05.411773 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:26:05.411784 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:26:05.411795 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:26:05.411806 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:26:05.411817 | orchestrator |
2026-03-02 00:26:05.411828 | orchestrator |
2026-03-02 00:26:05.411839 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 00:26:05.411850 | orchestrator | Monday 02 March 2026 00:26:05 +0000 (0:00:03.753) 0:00:40.307 **********
2026-03-02 00:26:05.411861 | orchestrator | ===============================================================================
2026-03-02 00:26:05.411871 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.52s
2026-03-02 00:26:05.411882 | orchestrator | Install required packages (Debian) -------------------------------------- 8.34s
2026-03-02 00:26:05.411893 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.75s
2026-03-02 00:26:05.411903 | orchestrator | Copy fact files --------------------------------------------------------- 3.64s
2026-03-02 00:26:05.411914 | orchestrator | Create custom facts directory ------------------------------------------- 1.41s
2026-03-02 00:26:05.411925 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.30s
2026-03-02 00:26:05.411942 | orchestrator | Copy fact file ---------------------------------------------------------- 1.14s
2026-03-02 00:26:05.533112 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.08s
2026-03-02 00:26:05.533217 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.06s
2026-03-02 00:26:05.533233 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s
2026-03-02 00:26:05.533245 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s
2026-03-02 00:26:05.533256 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s
2026-03-02 00:26:05.533268 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.18s
2026-03-02 00:26:05.533279 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.16s
2026-03-02 00:26:05.533290 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.10s
2026-03-02 00:26:05.533302 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-03-02 00:26:05.533313 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s
2026-03-02 00:26:05.533345 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.07s
2026-03-02 00:26:05.729567 | orchestrator | + osism apply bootstrap
2026-03-02 00:26:17.560206 | orchestrator | 2026-03-02 00:26:17 | INFO  | Prepare task for execution of bootstrap.
2026-03-02 00:26:17.632504 | orchestrator | 2026-03-02 00:26:17 | INFO  | Task ab0e69f8-c4fb-4319-9464-f6f161862c35 (bootstrap) was prepared for execution.
2026-03-02 00:26:17.632613 | orchestrator | 2026-03-02 00:26:17 | INFO  | It takes a moment until task ab0e69f8-c4fb-4319-9464-f6f161862c35 (bootstrap) has been started and output is visible here.
2026-03-02 00:26:33.703070 | orchestrator |
2026-03-02 00:26:33.703189 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-02 00:26:33.703208 | orchestrator |
2026-03-02 00:26:33.703221 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-02 00:26:33.703233 | orchestrator | Monday 02 March 2026 00:26:21 +0000 (0:00:00.104) 0:00:00.104 **********
2026-03-02 00:26:33.703245 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:26:33.703257 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:26:33.703268 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:26:33.703279 | orchestrator | ok: [testbed-manager]
2026-03-02 00:26:33.703290 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:26:33.703301 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:26:33.703312 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:26:33.703323 | orchestrator |
2026-03-02 00:26:33.703334 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-02 00:26:33.703345 | orchestrator |
2026-03-02 00:26:33.703356 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-02 00:26:33.703367 | orchestrator | Monday 02 March 2026 00:26:21 +0000 (0:00:00.167) 0:00:00.271 **********
2026-03-02 00:26:33.703378 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:26:33.703390 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:26:33.703402 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:26:33.703413 | orchestrator | ok: [testbed-manager]
2026-03-02 00:26:33.703424 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:26:33.703435 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:26:33.703445 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:26:33.703456 | orchestrator |
2026-03-02 00:26:33.703467 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-02 00:26:33.703478 | orchestrator |
2026-03-02 00:26:33.703490 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-02 00:26:33.703501 | orchestrator | Monday 02 March 2026 00:26:26 +0000 (0:00:04.643) 0:00:04.915 **********
2026-03-02 00:26:33.703513 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-02 00:26:33.703524 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-02 00:26:33.703536 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-02 00:26:33.703547 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-02 00:26:33.703557 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-02 00:26:33.703572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-02 00:26:33.703584 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-02 00:26:33.703597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-02 00:26:33.703610 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-02 00:26:33.703622 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-02 00:26:33.703636 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-02 00:26:33.703649 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-02 00:26:33.703662 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-02 00:26:33.703675 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-02 00:26:33.703689 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:26:33.703701 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-02 00:26:33.703733 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-02 00:26:33.703747 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-02 00:26:33.703759 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-02 00:26:33.703771 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-02 00:26:33.703784 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-02 00:26:33.703796 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-02 00:26:33.703813 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:26:33.703832 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-02 00:26:33.703850 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-02 00:26:33.703869 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-02 00:26:33.703889 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-02 00:26:33.703910 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-03-02 00:26:33.703931 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-02 00:26:33.704038 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-02 00:26:33.704072 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-02 00:26:33.704092 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-02 00:26:33.704110 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-03-02 00:26:33.704130 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-02 00:26:33.704150 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  
2026-03-02 00:26:33.704169 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-02 00:26:33.704188 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-02 00:26:33.704207 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-02 00:26:33.704225 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:26:33.704243 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-02 00:26:33.704260 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-02 00:26:33.704278 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:26:33.704318 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-03-02 00:26:33.704340 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-02 00:26:33.704359 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-02 00:26:33.704379 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-02 00:26:33.704421 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-02 00:26:33.704441 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:26:33.704459 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-02 00:26:33.704477 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-03-02 00:26:33.704496 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-02 00:26:33.704515 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:26:33.704533 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-02 00:26:33.704553 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-02 00:26:33.704573 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-02 00:26:33.704592 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:26:33.704611 | orchestrator | 2026-03-02 00:26:33.704630 | orchestrator | 
PLAY [Apply bootstrap roles part 1] ******************************************** 2026-03-02 00:26:33.704649 | orchestrator | 2026-03-02 00:26:33.704668 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-03-02 00:26:33.704686 | orchestrator | Monday 02 March 2026 00:26:26 +0000 (0:00:00.401) 0:00:05.316 ********** 2026-03-02 00:26:33.704705 | orchestrator | ok: [testbed-manager] 2026-03-02 00:26:33.704724 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:26:33.704760 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:26:33.704772 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:26:33.704784 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:26:33.704794 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:26:33.704805 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:26:33.704816 | orchestrator | 2026-03-02 00:26:33.704828 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-03-02 00:26:33.704839 | orchestrator | Monday 02 March 2026 00:26:27 +0000 (0:00:01.216) 0:00:06.533 ********** 2026-03-02 00:26:33.704850 | orchestrator | ok: [testbed-manager] 2026-03-02 00:26:33.704861 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:26:33.704872 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:26:33.704883 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:26:33.704894 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:26:33.704905 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:26:33.704948 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:26:33.704959 | orchestrator | 2026-03-02 00:26:33.704970 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-03-02 00:26:33.704981 | orchestrator | Monday 02 March 2026 00:26:29 +0000 (0:00:01.180) 0:00:07.714 ********** 2026-03-02 00:26:33.705163 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:26:33.705186 | orchestrator | 2026-03-02 00:26:33.705198 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-03-02 00:26:33.705209 | orchestrator | Monday 02 March 2026 00:26:29 +0000 (0:00:00.244) 0:00:07.958 ********** 2026-03-02 00:26:33.705220 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:26:33.705232 | orchestrator | changed: [testbed-manager] 2026-03-02 00:26:33.705243 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:26:33.705254 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:26:33.705265 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:26:33.705276 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:26:33.705286 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:26:33.705297 | orchestrator | 2026-03-02 00:26:33.705308 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-03-02 00:26:33.705319 | orchestrator | Monday 02 March 2026 00:26:31 +0000 (0:00:01.934) 0:00:09.893 ********** 2026-03-02 00:26:33.705331 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:26:33.705343 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:26:33.705356 | orchestrator | 2026-03-02 00:26:33.705367 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-03-02 00:26:33.705378 | orchestrator | Monday 02 March 2026 00:26:31 +0000 (0:00:00.256) 0:00:10.149 ********** 2026-03-02 00:26:33.705388 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:26:33.705398 | 
orchestrator | changed: [testbed-node-1] 2026-03-02 00:26:33.705407 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:26:33.705417 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:26:33.705427 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:26:33.705450 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:26:33.705461 | orchestrator | 2026-03-02 00:26:33.705471 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-03-02 00:26:33.705481 | orchestrator | Monday 02 March 2026 00:26:32 +0000 (0:00:01.005) 0:00:11.154 ********** 2026-03-02 00:26:33.705490 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:26:33.705500 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:26:33.705510 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:26:33.705519 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:26:33.705529 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:26:33.705538 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:26:33.705558 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:26:33.705568 | orchestrator | 2026-03-02 00:26:33.705578 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-03-02 00:26:33.705591 | orchestrator | Monday 02 March 2026 00:26:33 +0000 (0:00:00.651) 0:00:11.806 ********** 2026-03-02 00:26:33.705602 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:26:33.705611 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:26:33.705621 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:26:33.705631 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:26:33.705640 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:26:33.705701 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:26:33.705712 | orchestrator | ok: [testbed-manager] 2026-03-02 00:26:33.705722 | orchestrator | 2026-03-02 00:26:33.705732 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-03-02 00:26:33.705743 | orchestrator | Monday 02 March 2026 00:26:33 +0000 (0:00:00.461) 0:00:12.267 ********** 2026-03-02 00:26:33.705753 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:26:33.705763 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:26:33.705845 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:26:45.609636 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:26:45.609754 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:26:45.609770 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:26:45.609783 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:26:45.609795 | orchestrator | 2026-03-02 00:26:45.609808 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-02 00:26:45.609821 | orchestrator | Monday 02 March 2026 00:26:33 +0000 (0:00:00.196) 0:00:12.464 ********** 2026-03-02 00:26:45.609833 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:26:45.609862 | orchestrator | 2026-03-02 00:26:45.609874 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-02 00:26:45.609887 | orchestrator | Monday 02 March 2026 00:26:34 +0000 (0:00:00.271) 0:00:12.735 ********** 2026-03-02 00:26:45.609899 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:26:45.609910 | orchestrator | 2026-03-02 00:26:45.609921 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-02 
00:26:45.609932 | orchestrator | Monday 02 March 2026 00:26:34 +0000 (0:00:00.404) 0:00:13.139 ********** 2026-03-02 00:26:45.609943 | orchestrator | ok: [testbed-manager] 2026-03-02 00:26:45.609955 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:26:45.609966 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:26:45.609977 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:26:45.610109 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:26:45.610120 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:26:45.610131 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:26:45.610142 | orchestrator | 2026-03-02 00:26:45.610155 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-02 00:26:45.610168 | orchestrator | Monday 02 March 2026 00:26:35 +0000 (0:00:01.206) 0:00:14.346 ********** 2026-03-02 00:26:45.610181 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:26:45.610194 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:26:45.610208 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:26:45.610220 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:26:45.610232 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:26:45.610251 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:26:45.610270 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:26:45.610288 | orchestrator | 2026-03-02 00:26:45.610305 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-02 00:26:45.610362 | orchestrator | Monday 02 March 2026 00:26:35 +0000 (0:00:00.236) 0:00:14.583 ********** 2026-03-02 00:26:45.610383 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:26:45.610403 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:26:45.610424 | orchestrator | ok: [testbed-manager] 2026-03-02 00:26:45.610438 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:26:45.610452 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:26:45.610465 | orchestrator 
| ok: [testbed-node-2] 2026-03-02 00:26:45.610477 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:26:45.610490 | orchestrator | 2026-03-02 00:26:45.610504 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-02 00:26:45.610517 | orchestrator | Monday 02 March 2026 00:26:36 +0000 (0:00:00.588) 0:00:15.171 ********** 2026-03-02 00:26:45.610530 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:26:45.610543 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:26:45.610555 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:26:45.610566 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:26:45.610578 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:26:45.610589 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:26:45.610600 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:26:45.610611 | orchestrator | 2026-03-02 00:26:45.610622 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-02 00:26:45.610635 | orchestrator | Monday 02 March 2026 00:26:36 +0000 (0:00:00.223) 0:00:15.395 ********** 2026-03-02 00:26:45.610646 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:26:45.610657 | orchestrator | ok: [testbed-manager] 2026-03-02 00:26:45.610668 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:26:45.610679 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:26:45.610690 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:26:45.610701 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:26:45.610712 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:26:45.610723 | orchestrator | 2026-03-02 00:26:45.610734 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-02 00:26:45.610745 | orchestrator | Monday 02 March 2026 00:26:37 +0000 (0:00:00.548) 0:00:15.943 ********** 2026-03-02 00:26:45.610757 | orchestrator | ok: 
[testbed-manager] 2026-03-02 00:26:45.610768 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:26:45.610779 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:26:45.610790 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:26:45.610801 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:26:45.610812 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:26:45.610823 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:26:45.610834 | orchestrator | 2026-03-02 00:26:45.610855 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-02 00:26:45.610867 | orchestrator | Monday 02 March 2026 00:26:38 +0000 (0:00:01.138) 0:00:17.082 ********** 2026-03-02 00:26:45.610878 | orchestrator | ok: [testbed-manager] 2026-03-02 00:26:45.610889 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:26:45.610901 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:26:45.610912 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:26:45.610923 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:26:45.610934 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:26:45.610945 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:26:45.610956 | orchestrator | 2026-03-02 00:26:45.610967 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-02 00:26:45.611015 | orchestrator | Monday 02 March 2026 00:26:39 +0000 (0:00:01.094) 0:00:18.176 ********** 2026-03-02 00:26:45.611051 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:26:45.611064 | orchestrator | 2026-03-02 00:26:45.611076 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-02 00:26:45.611087 | orchestrator | Monday 02 March 2026 
00:26:39 +0000 (0:00:00.315) 0:00:18.492 ********** 2026-03-02 00:26:45.611109 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:26:45.611121 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:26:45.611132 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:26:45.611143 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:26:45.611154 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:26:45.611165 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:26:45.611176 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:26:45.611187 | orchestrator | 2026-03-02 00:26:45.611198 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-02 00:26:45.611209 | orchestrator | Monday 02 March 2026 00:26:41 +0000 (0:00:01.270) 0:00:19.763 ********** 2026-03-02 00:26:45.611220 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:26:45.611231 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:26:45.611242 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:26:45.611253 | orchestrator | ok: [testbed-manager] 2026-03-02 00:26:45.611264 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:26:45.611275 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:26:45.611286 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:26:45.611297 | orchestrator | 2026-03-02 00:26:45.611308 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-02 00:26:45.611319 | orchestrator | Monday 02 March 2026 00:26:41 +0000 (0:00:00.202) 0:00:19.966 ********** 2026-03-02 00:26:45.611330 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:26:45.611341 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:26:45.611352 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:26:45.611363 | orchestrator | ok: [testbed-manager] 2026-03-02 00:26:45.611374 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:26:45.611385 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:26:45.611396 | 
orchestrator | ok: [testbed-node-2] 2026-03-02 00:26:45.611407 | orchestrator | 2026-03-02 00:26:45.611418 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-02 00:26:45.611430 | orchestrator | Monday 02 March 2026 00:26:41 +0000 (0:00:00.218) 0:00:20.184 ********** 2026-03-02 00:26:45.611441 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:26:45.611452 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:26:45.611463 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:26:45.611474 | orchestrator | ok: [testbed-manager] 2026-03-02 00:26:45.611485 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:26:45.611496 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:26:45.611507 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:26:45.611518 | orchestrator | 2026-03-02 00:26:45.611529 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-02 00:26:45.611540 | orchestrator | Monday 02 March 2026 00:26:41 +0000 (0:00:00.227) 0:00:20.411 ********** 2026-03-02 00:26:45.611552 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:26:45.611565 | orchestrator | 2026-03-02 00:26:45.611576 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-02 00:26:45.611587 | orchestrator | Monday 02 March 2026 00:26:42 +0000 (0:00:00.284) 0:00:20.696 ********** 2026-03-02 00:26:45.611598 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:26:45.611609 | orchestrator | ok: [testbed-manager] 2026-03-02 00:26:45.611620 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:26:45.611631 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:26:45.611642 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:26:45.611653 | orchestrator | ok: 
[testbed-node-1] 2026-03-02 00:26:45.611664 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:26:45.611675 | orchestrator | 2026-03-02 00:26:45.611686 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-02 00:26:45.611697 | orchestrator | Monday 02 March 2026 00:26:42 +0000 (0:00:00.582) 0:00:21.279 ********** 2026-03-02 00:26:45.611708 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:26:45.611719 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:26:45.611738 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:26:45.611749 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:26:45.611760 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:26:45.611771 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:26:45.611782 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:26:45.611793 | orchestrator | 2026-03-02 00:26:45.611804 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-02 00:26:45.611815 | orchestrator | Monday 02 March 2026 00:26:42 +0000 (0:00:00.215) 0:00:21.494 ********** 2026-03-02 00:26:45.611827 | orchestrator | ok: [testbed-manager] 2026-03-02 00:26:45.611838 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:26:45.611849 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:26:45.611860 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:26:45.611871 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:26:45.611882 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:26:45.611893 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:26:45.611904 | orchestrator | 2026-03-02 00:26:45.611916 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-02 00:26:45.611927 | orchestrator | Monday 02 March 2026 00:26:43 +0000 (0:00:01.163) 0:00:22.658 ********** 2026-03-02 00:26:45.611938 | orchestrator | ok: [testbed-node-3] 2026-03-02 
00:26:45.611950 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:26:45.611961 | orchestrator | ok: [testbed-manager] 2026-03-02 00:26:45.611972 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:26:45.612031 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:26:45.612043 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:26:45.612054 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:26:45.612065 | orchestrator | 2026-03-02 00:26:45.612076 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-02 00:26:45.612087 | orchestrator | Monday 02 March 2026 00:26:44 +0000 (0:00:00.612) 0:00:23.270 ********** 2026-03-02 00:26:45.612098 | orchestrator | ok: [testbed-manager] 2026-03-02 00:26:45.612109 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:26:45.612120 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:26:45.612131 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:26:45.612150 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:27:25.371413 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:27:25.371564 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:27:25.371583 | orchestrator | 2026-03-02 00:27:25.371596 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-02 00:27:25.371610 | orchestrator | Monday 02 March 2026 00:26:45 +0000 (0:00:01.098) 0:00:24.369 ********** 2026-03-02 00:27:25.371621 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:27:25.371633 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:27:25.371644 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:27:25.371655 | orchestrator | changed: [testbed-manager] 2026-03-02 00:27:25.371666 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:27:25.371677 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:27:25.371688 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:27:25.371699 | orchestrator | 2026-03-02 00:27:25.371710 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-03-02 00:27:25.371722 | orchestrator | Monday 02 March 2026 00:27:02 +0000 (0:00:16.789) 0:00:41.159 ********** 2026-03-02 00:27:25.371734 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:27:25.371745 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:27:25.371756 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:27:25.371767 | orchestrator | ok: [testbed-manager] 2026-03-02 00:27:25.371778 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:27:25.371789 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:27:25.371799 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:27:25.371810 | orchestrator | 2026-03-02 00:27:25.371822 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-03-02 00:27:25.371841 | orchestrator | Monday 02 March 2026 00:27:02 +0000 (0:00:00.212) 0:00:41.372 ********** 2026-03-02 00:27:25.371859 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:27:25.371912 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:27:25.371931 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:27:25.371948 | orchestrator | ok: [testbed-manager] 2026-03-02 00:27:25.371992 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:27:25.372013 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:27:25.372033 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:27:25.372054 | orchestrator | 2026-03-02 00:27:25.372073 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-03-02 00:27:25.372093 | orchestrator | Monday 02 March 2026 00:27:02 +0000 (0:00:00.210) 0:00:41.582 ********** 2026-03-02 00:27:25.372107 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:27:25.372120 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:27:25.372133 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:27:25.372153 | orchestrator | ok: [testbed-manager] 2026-03-02 00:27:25.372172 | orchestrator | ok: 
[testbed-node-0]
2026-03-02 00:27:25.372191 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:27:25.372210 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:27:25.372228 | orchestrator |
2026-03-02 00:27:25.372246 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-02 00:27:25.372263 | orchestrator | Monday 02 March 2026 00:27:03 +0000 (0:00:00.216) 0:00:41.799 **********
2026-03-02 00:27:25.372286 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:27:25.372309 | orchestrator |
2026-03-02 00:27:25.372329 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-03-02 00:27:25.372347 | orchestrator | Monday 02 March 2026 00:27:03 +0000 (0:00:00.274) 0:00:42.073 **********
2026-03-02 00:27:25.372366 | orchestrator | ok: [testbed-manager]
2026-03-02 00:27:25.372479 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:27:25.372491 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:27:25.372502 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:27:25.372534 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:27:25.372545 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:27:25.372556 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:27:25.372567 | orchestrator |
2026-03-02 00:27:25.372578 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-03-02 00:27:25.372589 | orchestrator | Monday 02 March 2026 00:27:05 +0000 (0:00:01.961) 0:00:44.035 **********
2026-03-02 00:27:25.372601 | orchestrator | changed: [testbed-manager]
2026-03-02 00:27:25.372612 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:27:25.372627 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:27:25.372646 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:27:25.372665 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:27:25.372683 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:27:25.372701 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:27:25.372721 | orchestrator |
2026-03-02 00:27:25.372742 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-03-02 00:27:25.372762 | orchestrator | Monday 02 March 2026 00:27:06 +0000 (0:00:01.180) 0:00:45.216 **********
2026-03-02 00:27:25.372779 | orchestrator | ok: [testbed-manager]
2026-03-02 00:27:25.372799 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:27:25.372816 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:27:25.372827 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:27:25.372838 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:27:25.372849 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:27:25.372860 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:27:25.372870 | orchestrator |
2026-03-02 00:27:25.372882 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-03-02 00:27:25.372893 | orchestrator | Monday 02 March 2026 00:27:07 +0000 (0:00:00.829) 0:00:46.045 **********
2026-03-02 00:27:25.372913 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:27:25.372950 | orchestrator |
2026-03-02 00:27:25.372993 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-03-02 00:27:25.373013 | orchestrator | Monday 02 March 2026 00:27:07 +0000 (0:00:00.283) 0:00:46.329 **********
2026-03-02 00:27:25.373032 | orchestrator | changed: [testbed-manager]
2026-03-02 00:27:25.373052 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:27:25.373071 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:27:25.373089 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:27:25.373109 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:27:25.373128 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:27:25.373146 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:27:25.373164 | orchestrator |
2026-03-02 00:27:25.373212 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-03-02 00:27:25.373233 | orchestrator | Monday 02 March 2026 00:27:08 +0000 (0:00:01.025) 0:00:47.354 **********
2026-03-02 00:27:25.373252 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:27:25.373271 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:27:25.373290 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:27:25.373310 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:27:25.373328 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:27:25.373347 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:27:25.373366 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:27:25.373385 | orchestrator |
2026-03-02 00:27:25.373403 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-03-02 00:27:25.373422 | orchestrator | Monday 02 March 2026 00:27:08 +0000 (0:00:00.204) 0:00:47.558 **********
2026-03-02 00:27:25.373443 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:27:25.373462 | orchestrator |
2026-03-02 00:27:25.373480 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-03-02 00:27:25.373499 | orchestrator | Monday 02 March 2026 00:27:09 +0000 (0:00:00.280) 0:00:47.839 **********
2026-03-02 00:27:25.373518 | orchestrator | ok: [testbed-manager]
2026-03-02 00:27:25.373536 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:27:25.373554 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:27:25.373566 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:27:25.373577 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:27:25.373588 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:27:25.373598 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:27:25.373609 | orchestrator |
2026-03-02 00:27:25.373620 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-03-02 00:27:25.373631 | orchestrator | Monday 02 March 2026 00:27:10 +0000 (0:00:01.820) 0:00:49.660 **********
2026-03-02 00:27:25.373642 | orchestrator | changed: [testbed-manager]
2026-03-02 00:27:25.373653 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:27:25.373664 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:27:25.373675 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:27:25.373686 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:27:25.373696 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:27:25.373707 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:27:25.373718 | orchestrator |
2026-03-02 00:27:25.373729 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-03-02 00:27:25.373740 | orchestrator | Monday 02 March 2026 00:27:12 +0000 (0:00:01.069) 0:00:50.730 **********
2026-03-02 00:27:25.373751 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:27:25.373762 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:27:25.373772 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:27:25.373783 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:27:25.373794 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:27:25.373805 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:27:25.373827 | orchestrator | changed: [testbed-manager]
2026-03-02 00:27:25.373839 | orchestrator |
2026-03-02 00:27:25.373850 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-03-02 00:27:25.373861 | orchestrator | Monday 02 March 2026 00:27:22 +0000 (0:00:10.431) 0:01:01.161 **********
2026-03-02 00:27:25.373872 | orchestrator | ok: [testbed-manager]
2026-03-02 00:27:25.373883 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:27:25.373894 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:27:25.373905 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:27:25.373916 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:27:25.373927 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:27:25.373937 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:27:25.373948 | orchestrator |
2026-03-02 00:27:25.373959 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-03-02 00:27:25.374000 | orchestrator | Monday 02 March 2026 00:27:23 +0000 (0:00:01.165) 0:01:02.326 **********
2026-03-02 00:27:25.374012 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:27:25.374175 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:27:25.374188 | orchestrator | ok: [testbed-manager]
2026-03-02 00:27:25.374199 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:27:25.374210 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:27:25.374221 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:27:25.374232 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:27:25.374243 | orchestrator |
2026-03-02 00:27:25.374254 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-03-02 00:27:25.374265 | orchestrator | Monday 02 March 2026 00:27:24 +0000 (0:00:00.974) 0:01:03.301 **********
2026-03-02 00:27:25.374276 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:27:25.374287 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:27:25.374298 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:27:25.374309 | orchestrator | ok: [testbed-manager]
2026-03-02 00:27:25.374319 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:27:25.374330 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:27:25.374341 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:27:25.374351 | orchestrator |
2026-03-02 00:27:25.374362 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-03-02 00:27:25.374374 | orchestrator | Monday 02 March 2026 00:27:24 +0000 (0:00:00.226) 0:01:03.528 **********
2026-03-02 00:27:25.374384 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:27:25.374395 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:27:25.374406 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:27:25.374417 | orchestrator | ok: [testbed-manager]
2026-03-02 00:27:25.374437 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:27:25.374458 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:27:25.374502 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:27:25.374521 | orchestrator |
2026-03-02 00:27:25.374540 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-03-02 00:27:25.374559 | orchestrator | Monday 02 March 2026 00:27:25 +0000 (0:00:00.218) 0:01:03.746 **********
2026-03-02 00:27:25.374580 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:27:25.374599 | orchestrator |
2026-03-02 00:27:25.374637 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-03-02 00:29:46.870841 | orchestrator | Monday 02 March 2026 00:27:25 +0000 (0:00:00.301) 0:01:04.048 **********
2026-03-02 00:29:46.870990 | orchestrator | ok: [testbed-manager]
2026-03-02 00:29:46.871008 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:29:46.871020 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:29:46.871030 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:29:46.871040 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:29:46.871049 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:29:46.871059 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:29:46.871069 | orchestrator |
2026-03-02 00:29:46.871080 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-03-02 00:29:46.871113 | orchestrator | Monday 02 March 2026 00:27:27 +0000 (0:00:02.162) 0:01:06.210 **********
2026-03-02 00:29:46.871124 | orchestrator | changed: [testbed-manager]
2026-03-02 00:29:46.871136 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:29:46.871146 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:29:46.871155 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:29:46.871165 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:29:46.871174 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:29:46.871184 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:29:46.871194 | orchestrator |
2026-03-02 00:29:46.871204 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-03-02 00:29:46.871215 | orchestrator | Monday 02 March 2026 00:27:28 +0000 (0:00:00.649) 0:01:06.859 **********
2026-03-02 00:29:46.871224 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:29:46.871234 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:29:46.871244 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:29:46.871254 | orchestrator | ok: [testbed-manager]
2026-03-02 00:29:46.871263 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:29:46.871273 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:29:46.871282 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:29:46.871292 | orchestrator |
2026-03-02 00:29:46.871302 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-03-02 00:29:46.871312 | orchestrator | Monday 02 March 2026 00:27:28 +0000 (0:00:00.168) 0:01:07.028 **********
2026-03-02 00:29:46.871321 | orchestrator | ok: [testbed-manager]
2026-03-02 00:29:46.871331 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:29:46.871340 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:29:46.871350 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:29:46.871359 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:29:46.871369 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:29:46.871379 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:29:46.871390 | orchestrator |
2026-03-02 00:29:46.871401 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-03-02 00:29:46.871412 | orchestrator | Monday 02 March 2026 00:27:29 +0000 (0:00:01.191) 0:01:08.220 **********
2026-03-02 00:29:46.871424 | orchestrator | changed: [testbed-manager]
2026-03-02 00:29:46.871435 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:29:46.871447 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:29:46.871459 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:29:46.871470 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:29:46.871481 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:29:46.871493 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:29:46.871504 | orchestrator |
2026-03-02 00:29:46.871515 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-03-02 00:29:46.871526 | orchestrator | Monday 02 March 2026 00:27:31 +0000 (0:00:01.718) 0:01:09.939 **********
2026-03-02 00:29:46.871537 | orchestrator | ok: [testbed-manager]
2026-03-02 00:29:46.871548 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:29:46.871559 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:29:46.871570 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:29:46.871581 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:29:46.871592 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:29:46.871603 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:29:46.871614 | orchestrator |
2026-03-02 00:29:46.871626 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-03-02 00:29:46.871637 | orchestrator | Monday 02 March 2026 00:27:33 +0000 (0:00:02.560) 0:01:12.499 **********
2026-03-02 00:29:46.871649 | orchestrator | ok: [testbed-manager]
2026-03-02 00:29:46.871660 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:29:46.871671 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:29:46.871681 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:29:46.871693 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:29:46.871704 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:29:46.871716 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:29:46.871728 | orchestrator |
2026-03-02 00:29:46.871739 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-03-02 00:29:46.871756 | orchestrator | Monday 02 March 2026 00:28:07 +0000 (0:00:33.704) 0:01:46.204 **********
2026-03-02 00:29:46.871797 | orchestrator | changed: [testbed-manager]
2026-03-02 00:29:46.871807 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:29:46.871828 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:29:46.871838 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:29:46.871848 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:29:46.871857 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:29:46.871866 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:29:46.871876 | orchestrator |
2026-03-02 00:29:46.871886 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-03-02 00:29:46.871895 | orchestrator | Monday 02 March 2026 00:29:33 +0000 (0:01:25.566) 0:03:11.772 **********
2026-03-02 00:29:46.871905 | orchestrator | ok: [testbed-manager]
2026-03-02 00:29:46.871933 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:29:46.871943 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:29:46.871953 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:29:46.871962 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:29:46.871972 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:29:46.871982 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:29:46.871991 | orchestrator |
2026-03-02 00:29:46.872002 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-03-02 00:29:46.872012 | orchestrator | Monday 02 March 2026 00:29:34 +0000 (0:00:01.791) 0:03:13.563 **********
2026-03-02 00:29:46.872021 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:29:46.872031 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:29:46.872040 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:29:46.872050 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:29:46.872059 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:29:46.872069 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:29:46.872079 | orchestrator | changed: [testbed-manager]
2026-03-02 00:29:46.872088 | orchestrator |
2026-03-02 00:29:46.872098 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-03-02 00:29:46.872108 | orchestrator | Monday 02 March 2026 00:29:45 +0000 (0:00:10.834) 0:03:24.398 **********
2026-03-02 00:29:46.872148 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-03-02 00:29:46.872170 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-03-02 00:29:46.872183 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-03-02 00:29:46.872196 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-02 00:29:46.872213 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-02 00:29:46.872223 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-03-02 00:29:46.872237 | orchestrator |
2026-03-02 00:29:46.872247 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-03-02 00:29:46.872257 | orchestrator | Monday 02 March 2026 00:29:46 +0000 (0:00:00.373) 0:03:24.771 **********
2026-03-02 00:29:46.872267 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-02 00:29:46.872277 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:29:46.872287 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-02 00:29:46.872297 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-02 00:29:46.872307 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:29:46.872316 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:29:46.872326 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-02 00:29:46.872335 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:29:46.872345 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-02 00:29:46.872364 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-02 00:29:46.872375 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-02 00:29:46.872384 | orchestrator |
2026-03-02 00:29:46.872394 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-03-02 00:29:46.872409 | orchestrator | Monday 02 March 2026 00:29:46 +0000 (0:00:00.694) 0:03:25.466 **********
2026-03-02 00:29:46.872419 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-02 00:29:46.872429 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-02 00:29:46.872439 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-02 00:29:46.872449 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-02 00:29:46.872459 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-02 00:29:46.872474 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-02 00:29:53.538761 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-02 00:29:53.538892 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-02 00:29:53.538987 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-02 00:29:53.539005 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-02 00:29:53.539017 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-02 00:29:53.539029 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-02 00:29:53.539040 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-02 00:29:53.539052 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-02 00:29:53.539089 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-02 00:29:53.539102 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-02 00:29:53.539114 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:29:53.539127 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-02 00:29:53.539138 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-02 00:29:53.539149 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-02 00:29:53.539160 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-02 00:29:53.539172 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-02 00:29:53.539183 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-02 00:29:53.539194 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-02 00:29:53.539205 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-02 00:29:53.539216 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-02 00:29:53.539227 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-02 00:29:53.539238 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-02 00:29:53.539249 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:29:53.539261 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-02 00:29:53.539272 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-02 00:29:53.539283 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-02 00:29:53.539296 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-02 00:29:53.539309 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-02 00:29:53.539322 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-02 00:29:53.539335 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-02 00:29:53.539347 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-02 00:29:53.539360 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-02 00:29:53.539373 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-02 00:29:53.539385 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-02 00:29:53.539399 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-02 00:29:53.539422 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-02 00:29:53.539469 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:29:53.539488 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:29:53.539506 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-02 00:29:53.539525 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-02 00:29:53.539543 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-02 00:29:53.539579 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-02 00:29:53.539599 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-02 00:29:53.539635 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-02 00:29:53.539648 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-02 00:29:53.539661 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-02 00:29:53.539672 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-02 00:29:53.539683 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-02 00:29:53.539694 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-02 00:29:53.539705 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-02 00:29:53.539716 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-02 00:29:53.539727 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-02 00:29:53.539738 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-02 00:29:53.539749 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-02 00:29:53.539760 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-02 00:29:53.539771 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-02 00:29:53.539782 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-02 00:29:53.539793 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-02 00:29:53.539804 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-02 00:29:53.539817 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-02 00:29:53.539836 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-02 00:29:53.539868 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-02 00:29:53.539885 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-02 00:29:53.539904 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-02 00:29:53.539988 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-02 00:29:53.540007 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-02 00:29:53.540025 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-02 00:29:53.540043 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-02 00:29:53.540055 | orchestrator |
2026-03-02 00:29:53.540067 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-03-02 00:29:53.540079 | orchestrator | Monday 02 March 2026 00:29:51 +0000 (0:00:04.784) 0:03:30.251 **********
2026-03-02 00:29:53.540090 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-02 00:29:53.540101 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-02 00:29:53.540112 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-02 00:29:53.540122 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-02 00:29:53.540144 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-02 00:29:53.540155 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-02 00:29:53.540166 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-02 00:29:53.540177 | orchestrator |
2026-03-02 00:29:53.540188 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-02 00:29:53.540199 | orchestrator | Monday 02 March 2026 00:29:52 +0000 (0:00:00.552) 0:03:30.804 **********
2026-03-02 00:29:53.540209 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-02 00:29:53.540312 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-02 00:29:53.540329 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:29:53.540341 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:29:53.540352 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-02 00:29:53.540363 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:29:53.540374 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-02 00:29:53.540385 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:29:53.540396 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-02 00:29:53.540407 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-02 00:29:53.540435 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
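The per-group sysctl tasks above each loop over a list of `{'name': ..., 'value': ...}` items, applying them only on hosts that belong to the matching group (everyone else reports `skipping`). A minimal sketch of how such a loop can be written in Ansible; this is illustrative only, not the actual `osism.commons.sysctl` role source, and the variable name and group condition are assumptions:

```yaml
# Illustrative sketch: apply a list of kernel parameters on hosts in a group.
# "sysctl_parameters" and the group_names check are assumed names, not the
# role's real implementation.
- name: Set sysctl parameters on compute
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    state: present
    sysctl_set: true   # also apply immediately via sysctl -w
    reload: true       # reload from the sysctl file after writing
  loop: "{{ sysctl_parameters }}"
  when: "'compute' in group_names"   # hosts outside the group are skipped
```

This matches the pattern visible in the log: managers and control nodes skip the `compute` items, while testbed-node-3/4/5 report `changed` for `net.netfilter.nf_conntrack_max`.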
2026-03-02 00:30:06.130382 | orchestrator |
2026-03-02 00:30:06.130487 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-02 00:30:06.130497 | orchestrator | Monday 02 March 2026 00:29:53 +0000 (0:00:01.429) 0:03:32.233 **********
2026-03-02 00:30:06.130504 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-02 00:30:06.130512 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:30:06.130519 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-02 00:30:06.130526 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-02 00:30:06.130532 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:30:06.130538 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-02 00:30:06.130545 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:30:06.130551 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:30:06.130557 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-02 00:30:06.130563 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-02 00:30:06.130570 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-02 00:30:06.130576 | orchestrator |
2026-03-02 00:30:06.130582 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-02 00:30:06.130588 | orchestrator | Monday 02 March 2026 00:29:54 +0000 (0:00:00.563) 0:03:32.797 **********
2026-03-02 00:30:06.130595 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-02 00:30:06.130601 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:30:06.130607 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-02 00:30:06.130614 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-02 00:30:06.130620 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:30:06.130646 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:30:06.130652 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-02 00:30:06.130658 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:30:06.130664 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-02 00:30:06.130670 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-02 00:30:06.130676 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-02 00:30:06.130681 | orchestrator |
2026-03-02 00:30:06.130687 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-03-02 00:30:06.130693 | orchestrator | Monday 02 March 2026 00:29:54 +0000 (0:00:00.527) 0:03:33.324 **********
2026-03-02 00:30:06.130699 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:30:06.130705 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:30:06.130712 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:30:06.130718 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:30:06.130724 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:30:06.130730 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:30:06.130736 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:30:06.130742 | orchestrator |
2026-03-02 00:30:06.130748 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-02 00:30:06.130754 | orchestrator | Monday 02 March 2026 00:29:54 +0000 (0:00:00.255) 0:03:33.580 **********
2026-03-02 00:30:06.130761 | orchestrator | ok: [testbed-manager]
2026-03-02 00:30:06.130768 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:30:06.130774 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:30:06.130780 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:30:06.130785 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:30:06.130792 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:30:06.130797 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:30:06.130803 | orchestrator |
2026-03-02 00:30:06.130809 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-02 00:30:06.130815 | orchestrator | Monday 02 March 2026 00:30:00 +0000 (0:00:05.894) 0:03:39.475 **********
2026-03-02 00:30:06.130821 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-02 00:30:06.130827 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-02 00:30:06.130833 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:30:06.130839 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:30:06.130845 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-02 00:30:06.130851 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:30:06.130857 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-02 00:30:06.130862 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:30:06.130868 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-02 00:30:06.130874 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-02 00:30:06.130880 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:30:06.130886 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:30:06.130892 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-02 00:30:06.130898
| orchestrator | skipping: [testbed-node-2] 2026-03-02 00:30:06.130925 | orchestrator | 2026-03-02 00:30:06.130931 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-03-02 00:30:06.130938 | orchestrator | Monday 02 March 2026 00:30:01 +0000 (0:00:00.299) 0:03:39.774 ********** 2026-03-02 00:30:06.130946 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-03-02 00:30:06.130956 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-03-02 00:30:06.130965 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-03-02 00:30:06.130989 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-03-02 00:30:06.130997 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-03-02 00:30:06.131007 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-03-02 00:30:06.131021 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-03-02 00:30:06.131027 | orchestrator | 2026-03-02 00:30:06.131034 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-03-02 00:30:06.131040 | orchestrator | Monday 02 March 2026 00:30:02 +0000 (0:00:01.074) 0:03:40.849 ********** 2026-03-02 00:30:06.131048 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:30:06.131056 | orchestrator | 2026-03-02 00:30:06.131062 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-03-02 00:30:06.131068 | orchestrator | Monday 02 March 2026 00:30:02 +0000 (0:00:00.373) 0:03:41.222 ********** 2026-03-02 00:30:06.131074 | orchestrator | ok: [testbed-manager] 2026-03-02 00:30:06.131080 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:30:06.131086 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:30:06.131092 | orchestrator | ok: 
[testbed-node-5] 2026-03-02 00:30:06.131097 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:30:06.131103 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:30:06.131109 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:30:06.131115 | orchestrator | 2026-03-02 00:30:06.131121 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-03-02 00:30:06.131127 | orchestrator | Monday 02 March 2026 00:30:03 +0000 (0:00:01.264) 0:03:42.487 ********** 2026-03-02 00:30:06.131133 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:30:06.131139 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:30:06.131146 | orchestrator | ok: [testbed-manager] 2026-03-02 00:30:06.131151 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:30:06.131157 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:30:06.131163 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:30:06.131169 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:30:06.131174 | orchestrator | 2026-03-02 00:30:06.131180 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-03-02 00:30:06.131187 | orchestrator | Monday 02 March 2026 00:30:04 +0000 (0:00:00.598) 0:03:43.086 ********** 2026-03-02 00:30:06.131193 | orchestrator | changed: [testbed-manager] 2026-03-02 00:30:06.131215 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:30:06.131221 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:30:06.131226 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:30:06.131232 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:30:06.131238 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:30:06.131244 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:30:06.131249 | orchestrator | 2026-03-02 00:30:06.131256 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-03-02 00:30:06.131263 | orchestrator | Monday 02 March 2026 00:30:04 +0000 (0:00:00.587) 
0:03:43.673 ********** 2026-03-02 00:30:06.131269 | orchestrator | ok: [testbed-manager] 2026-03-02 00:30:06.131274 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:30:06.131280 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:30:06.131286 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:30:06.131292 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:30:06.131298 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:30:06.131304 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:30:06.131309 | orchestrator | 2026-03-02 00:30:06.131316 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-03-02 00:30:06.131323 | orchestrator | Monday 02 March 2026 00:30:05 +0000 (0:00:00.587) 0:03:44.261 ********** 2026-03-02 00:30:06.131333 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772409917.343573, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 00:30:06.131351 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772409926.092248, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 00:30:06.131358 | orchestrator | 
changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772409934.2727242, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 00:30:06.131378 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772409930.2882679, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 00:30:11.039198 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772409936.6918871, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 00:30:11.039307 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772409932.114079, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 00:30:11.039324 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772409935.9629312, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 00:30:11.039336 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 00:30:11.039374 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 00:30:11.039402 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 00:30:11.039414 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 00:30:11.039444 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 00:30:11.039457 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 00:30:11.039468 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 00:30:11.039481 | orchestrator | 2026-03-02 00:30:11.039495 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-03-02 00:30:11.039508 | orchestrator | Monday 02 March 2026 00:30:06 +0000 (0:00:00.914) 0:03:45.176 ********** 2026-03-02 00:30:11.039519 | orchestrator | changed: [testbed-manager] 2026-03-02 00:30:11.039532 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:30:11.039543 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:30:11.039562 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:30:11.039573 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:30:11.039583 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:30:11.039594 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:30:11.039605 | orchestrator | 2026-03-02 00:30:11.039616 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-03-02 00:30:11.039627 | orchestrator | Monday 02 March 2026 00:30:07 +0000 (0:00:01.047) 0:03:46.224 ********** 2026-03-02 00:30:11.039638 | orchestrator | changed: [testbed-manager] 2026-03-02 00:30:11.039649 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:30:11.039659 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:30:11.039670 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:30:11.039680 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:30:11.039691 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:30:11.039702 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:30:11.039713 | orchestrator | 2026-03-02 00:30:11.039724 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-03-02 00:30:11.039737 | orchestrator | Monday 02 March 2026 00:30:08 +0000 (0:00:01.107) 0:03:47.332 ********** 2026-03-02 00:30:11.039750 | orchestrator | changed: [testbed-manager] 2026-03-02 00:30:11.039763 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:30:11.039776 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:30:11.039789 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:30:11.039801 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:30:11.039814 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:30:11.039827 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:30:11.039840 | orchestrator | 2026-03-02 00:30:11.039854 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-03-02 00:30:11.039872 | orchestrator | Monday 02 March 2026 00:30:09 +0000 (0:00:00.997) 0:03:48.329 ********** 2026-03-02 00:30:11.039886 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:30:11.039899 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:30:11.039939 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:30:11.039953 | orchestrator | skipping: [testbed-manager] 
2026-03-02 00:30:11.039965 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:30:11.039978 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:30:11.039991 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:30:11.040004 | orchestrator | 2026-03-02 00:30:11.040018 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-03-02 00:30:11.040031 | orchestrator | Monday 02 March 2026 00:30:09 +0000 (0:00:00.292) 0:03:48.622 ********** 2026-03-02 00:30:11.040044 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:30:11.040057 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:30:11.040071 | orchestrator | ok: [testbed-manager] 2026-03-02 00:30:11.040084 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:30:11.040095 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:30:11.040105 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:30:11.040116 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:30:11.040127 | orchestrator | 2026-03-02 00:30:11.040138 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-03-02 00:30:11.040149 | orchestrator | Monday 02 March 2026 00:30:10 +0000 (0:00:00.729) 0:03:49.351 ********** 2026-03-02 00:30:11.040161 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:30:11.040176 | orchestrator | 2026-03-02 00:30:11.040196 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-03-02 00:30:11.040225 | orchestrator | Monday 02 March 2026 00:30:11 +0000 (0:00:00.362) 0:03:49.714 ********** 2026-03-02 00:31:27.788496 | orchestrator | ok: [testbed-manager] 2026-03-02 00:31:27.788596 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:31:27.788608 | orchestrator | changed: 
[testbed-node-5] 2026-03-02 00:31:27.788616 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:31:27.788652 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:31:27.788660 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:31:27.788668 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:31:27.788683 | orchestrator | 2026-03-02 00:31:27.788692 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-03-02 00:31:27.788701 | orchestrator | Monday 02 March 2026 00:30:19 +0000 (0:00:08.148) 0:03:57.862 ********** 2026-03-02 00:31:27.788709 | orchestrator | ok: [testbed-manager] 2026-03-02 00:31:27.788716 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:31:27.788723 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:31:27.788730 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:31:27.788738 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:31:27.788744 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:31:27.788751 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:31:27.788759 | orchestrator | 2026-03-02 00:31:27.788766 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-03-02 00:31:27.788773 | orchestrator | Monday 02 March 2026 00:30:20 +0000 (0:00:01.271) 0:03:59.134 ********** 2026-03-02 00:31:27.788781 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:31:27.788788 | orchestrator | ok: [testbed-manager] 2026-03-02 00:31:27.788794 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:31:27.788800 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:31:27.788806 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:31:27.788811 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:31:27.788817 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:31:27.788823 | orchestrator | 2026-03-02 00:31:27.788830 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-03-02 00:31:27.788837 | orchestrator | 
Monday 02 March 2026 00:30:21 +0000 (0:00:01.034) 0:04:00.169 ********** 2026-03-02 00:31:27.788844 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:31:27.788864 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:31:27.788886 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:31:27.788893 | orchestrator | ok: [testbed-manager] 2026-03-02 00:31:27.788900 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:31:27.788907 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:31:27.788914 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:31:27.788921 | orchestrator | 2026-03-02 00:31:27.788928 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-03-02 00:31:27.788937 | orchestrator | Monday 02 March 2026 00:30:21 +0000 (0:00:00.247) 0:04:00.416 ********** 2026-03-02 00:31:27.788944 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:31:27.788951 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:31:27.788958 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:31:27.788965 | orchestrator | ok: [testbed-manager] 2026-03-02 00:31:27.788972 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:31:27.788979 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:31:27.788986 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:31:27.788993 | orchestrator | 2026-03-02 00:31:27.789000 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-03-02 00:31:27.789009 | orchestrator | Monday 02 March 2026 00:30:22 +0000 (0:00:00.282) 0:04:00.698 ********** 2026-03-02 00:31:27.789018 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:31:27.789025 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:31:27.789032 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:31:27.789040 | orchestrator | ok: [testbed-manager] 2026-03-02 00:31:27.789047 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:31:27.789057 | orchestrator | ok: [testbed-node-1] 2026-03-02 
00:31:27.789064 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:31:27.789072 | orchestrator | 2026-03-02 00:31:27.789079 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-03-02 00:31:27.789087 | orchestrator | Monday 02 March 2026 00:30:22 +0000 (0:00:00.317) 0:04:01.015 ********** 2026-03-02 00:31:27.789095 | orchestrator | ok: [testbed-manager] 2026-03-02 00:31:27.789103 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:31:27.789112 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:31:27.789127 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:31:27.789136 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:31:27.789143 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:31:27.789151 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:31:27.789159 | orchestrator | 2026-03-02 00:31:27.789167 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-03-02 00:31:27.789175 | orchestrator | Monday 02 March 2026 00:30:27 +0000 (0:00:05.428) 0:04:06.444 ********** 2026-03-02 00:31:27.789184 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:31:27.789194 | orchestrator | 2026-03-02 00:31:27.789203 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-03-02 00:31:27.789212 | orchestrator | Monday 02 March 2026 00:30:28 +0000 (0:00:00.402) 0:04:06.846 ********** 2026-03-02 00:31:27.789219 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-03-02 00:31:27.789227 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-03-02 00:31:27.789252 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-03-02 00:31:27.789260 | orchestrator | skipping: 
[testbed-node-4] => (item=apt-daily)  2026-03-02 00:31:27.789268 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:31:27.789276 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-03-02 00:31:27.789285 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-03-02 00:31:27.789293 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:31:27.789300 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-03-02 00:31:27.789309 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-03-02 00:31:27.789317 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:31:27.789324 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-03-02 00:31:27.789333 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-03-02 00:31:27.789341 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:31:27.789349 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:31:27.789357 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-03-02 00:31:27.789380 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-03-02 00:31:27.789387 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:31:27.789394 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-03-02 00:31:27.789401 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-03-02 00:31:27.789408 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:31:27.789416 | orchestrator | 2026-03-02 00:31:27.789423 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-03-02 00:31:27.789430 | orchestrator | Monday 02 March 2026 00:30:28 +0000 (0:00:00.312) 0:04:07.159 ********** 2026-03-02 00:31:27.789437 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:31:27.789445 | orchestrator | 2026-03-02 00:31:27.789452 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-03-02 00:31:27.789459 | orchestrator | Monday 02 March 2026 00:30:28 +0000 (0:00:00.367) 0:04:07.526 ********** 2026-03-02 00:31:27.789466 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-03-02 00:31:27.789473 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-03-02 00:31:27.789480 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:31:27.789487 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-03-02 00:31:27.789494 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:31:27.789501 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-03-02 00:31:27.789508 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:31:27.789524 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:31:27.789531 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-03-02 00:31:27.789553 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-03-02 00:31:27.789561 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:31:27.789581 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:31:27.789588 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-03-02 00:31:27.789595 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:31:27.789602 | orchestrator | 2026-03-02 00:31:27.789609 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-03-02 00:31:27.789616 | orchestrator | Monday 02 March 2026 00:30:29 +0000 (0:00:00.299) 0:04:07.826 ********** 2026-03-02 00:31:27.789623 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:31:27.789631 | orchestrator | 2026-03-02 00:31:27.789638 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-03-02 00:31:27.789645 | orchestrator | Monday 02 March 2026 00:30:29 +0000 (0:00:00.372) 0:04:08.199 ********** 2026-03-02 00:31:27.789652 | orchestrator | changed: [testbed-manager] 2026-03-02 00:31:27.789659 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:31:27.789666 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:31:27.789685 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:31:27.789692 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:31:27.789699 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:31:27.789706 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:31:27.789713 | orchestrator | 2026-03-02 00:31:27.789720 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-03-02 00:31:27.789727 | orchestrator | Monday 02 March 2026 00:31:02 +0000 (0:00:33.130) 0:04:41.329 ********** 2026-03-02 00:31:27.789734 | orchestrator | changed: [testbed-manager] 2026-03-02 00:31:27.789742 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:31:27.789749 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:31:27.789756 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:31:27.789763 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:31:27.789770 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:31:27.789777 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:31:27.789784 | orchestrator | 2026-03-02 00:31:27.789794 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-03-02 00:31:27.789800 | orchestrator | 
Monday 02 March 2026 00:31:11 +0000 (0:00:08.712) 0:04:50.042 ********** 2026-03-02 00:31:27.789806 | orchestrator | changed: [testbed-manager] 2026-03-02 00:31:27.789812 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:31:27.789818 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:31:27.789825 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:31:27.789832 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:31:27.789839 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:31:27.789845 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:31:27.789852 | orchestrator | 2026-03-02 00:31:27.789859 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-03-02 00:31:27.789907 | orchestrator | Monday 02 March 2026 00:31:19 +0000 (0:00:08.319) 0:04:58.361 ********** 2026-03-02 00:31:27.789916 | orchestrator | ok: [testbed-manager] 2026-03-02 00:31:27.789924 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:31:27.789931 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:31:27.789938 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:31:27.789944 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:31:27.789951 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:31:27.789958 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:31:27.789965 | orchestrator | 2026-03-02 00:31:27.789972 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-03-02 00:31:27.789986 | orchestrator | Monday 02 March 2026 00:31:21 +0000 (0:00:01.861) 0:05:00.223 ********** 2026-03-02 00:31:27.789993 | orchestrator | changed: [testbed-manager] 2026-03-02 00:31:27.790001 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:31:27.790008 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:31:27.790068 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:31:27.790075 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:31:27.790082 | orchestrator | changed: 
[testbed-node-2] 2026-03-02 00:31:27.790090 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:31:27.790097 | orchestrator | 2026-03-02 00:31:27.790112 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-03-02 00:31:38.231817 | orchestrator | Monday 02 March 2026 00:31:27 +0000 (0:00:06.240) 0:05:06.464 ********** 2026-03-02 00:31:38.231950 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:31:38.231961 | orchestrator | 2026-03-02 00:31:38.231967 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-03-02 00:31:38.231973 | orchestrator | Monday 02 March 2026 00:31:28 +0000 (0:00:00.405) 0:05:06.869 ********** 2026-03-02 00:31:38.231978 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:31:38.231985 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:31:38.231989 | orchestrator | changed: [testbed-manager] 2026-03-02 00:31:38.231995 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:31:38.231999 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:31:38.232004 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:31:38.232009 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:31:38.232014 | orchestrator | 2026-03-02 00:31:38.232019 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-03-02 00:31:38.232023 | orchestrator | Monday 02 March 2026 00:31:28 +0000 (0:00:00.705) 0:05:07.575 ********** 2026-03-02 00:31:38.232028 | orchestrator | ok: [testbed-manager] 2026-03-02 00:31:38.232034 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:31:38.232039 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:31:38.232043 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:31:38.232048 | 
orchestrator | ok: [testbed-node-0] 2026-03-02 00:31:38.232052 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:31:38.232057 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:31:38.232062 | orchestrator | 2026-03-02 00:31:38.232066 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-03-02 00:31:38.232071 | orchestrator | Monday 02 March 2026 00:31:30 +0000 (0:00:01.684) 0:05:09.259 ********** 2026-03-02 00:31:38.232076 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:31:38.232080 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:31:38.232085 | orchestrator | changed: [testbed-manager] 2026-03-02 00:31:38.232090 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:31:38.232094 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:31:38.232099 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:31:38.232103 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:31:38.232108 | orchestrator | 2026-03-02 00:31:38.232113 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-03-02 00:31:38.232117 | orchestrator | Monday 02 March 2026 00:31:31 +0000 (0:00:00.745) 0:05:10.005 ********** 2026-03-02 00:31:38.232122 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:31:38.232127 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:31:38.232131 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:31:38.232136 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:31:38.232140 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:31:38.232145 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:31:38.232149 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:31:38.232154 | orchestrator | 2026-03-02 00:31:38.232158 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-03-02 00:31:38.232163 | orchestrator | Monday 02 March 2026 00:31:31 +0000 (0:00:00.259) 
0:05:10.264 ********** 2026-03-02 00:31:38.232187 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:31:38.232196 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:31:38.232203 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:31:38.232211 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:31:38.232217 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:31:38.232224 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:31:38.232231 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:31:38.232238 | orchestrator | 2026-03-02 00:31:38.232245 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-03-02 00:31:38.232253 | orchestrator | Monday 02 March 2026 00:31:31 +0000 (0:00:00.359) 0:05:10.624 ********** 2026-03-02 00:31:38.232260 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:31:38.232267 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:31:38.232274 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:31:38.232281 | orchestrator | ok: [testbed-manager] 2026-03-02 00:31:38.232285 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:31:38.232289 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:31:38.232304 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:31:38.232308 | orchestrator | 2026-03-02 00:31:38.232313 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-03-02 00:31:38.232317 | orchestrator | Monday 02 March 2026 00:31:32 +0000 (0:00:00.270) 0:05:10.894 ********** 2026-03-02 00:31:38.232322 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:31:38.232326 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:31:38.232330 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:31:38.232334 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:31:38.232339 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:31:38.232343 | orchestrator | skipping: [testbed-node-1] 2026-03-02 
00:31:38.232347 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:31:38.232351 | orchestrator | 2026-03-02 00:31:38.232356 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-02 00:31:38.232361 | orchestrator | Monday 02 March 2026 00:31:32 +0000 (0:00:00.278) 0:05:11.172 ********** 2026-03-02 00:31:38.232366 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:31:38.232370 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:31:38.232374 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:31:38.232378 | orchestrator | ok: [testbed-manager] 2026-03-02 00:31:38.232383 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:31:38.232387 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:31:38.232391 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:31:38.232396 | orchestrator | 2026-03-02 00:31:38.232401 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-02 00:31:38.232407 | orchestrator | Monday 02 March 2026 00:31:32 +0000 (0:00:00.282) 0:05:11.455 ********** 2026-03-02 00:31:38.232412 | orchestrator | ok: [testbed-node-3] =>  2026-03-02 00:31:38.232417 | orchestrator |  docker_version: 5:27.5.1 2026-03-02 00:31:38.232422 | orchestrator | ok: [testbed-node-4] =>  2026-03-02 00:31:38.232428 | orchestrator |  docker_version: 5:27.5.1 2026-03-02 00:31:38.232433 | orchestrator | ok: [testbed-node-5] =>  2026-03-02 00:31:38.232438 | orchestrator |  docker_version: 5:27.5.1 2026-03-02 00:31:38.232443 | orchestrator | ok: [testbed-manager] =>  2026-03-02 00:31:38.232448 | orchestrator |  docker_version: 5:27.5.1 2026-03-02 00:31:38.232465 | orchestrator | ok: [testbed-node-0] =>  2026-03-02 00:31:38.232471 | orchestrator |  docker_version: 5:27.5.1 2026-03-02 00:31:38.232476 | orchestrator | ok: [testbed-node-1] =>  2026-03-02 00:31:38.232482 | orchestrator |  docker_version: 5:27.5.1 2026-03-02 00:31:38.232487 | orchestrator | ok: [testbed-node-2] =>  
2026-03-02 00:31:38.232492 | orchestrator |  docker_version: 5:27.5.1 2026-03-02 00:31:38.232497 | orchestrator | 2026-03-02 00:31:38.232503 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-02 00:31:38.232508 | orchestrator | Monday 02 March 2026 00:31:33 +0000 (0:00:00.239) 0:05:11.694 ********** 2026-03-02 00:31:38.232513 | orchestrator | ok: [testbed-node-3] =>  2026-03-02 00:31:38.232524 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-02 00:31:38.232530 | orchestrator | ok: [testbed-node-4] =>  2026-03-02 00:31:38.232535 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-02 00:31:38.232540 | orchestrator | ok: [testbed-node-5] =>  2026-03-02 00:31:38.232545 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-02 00:31:38.232550 | orchestrator | ok: [testbed-manager] =>  2026-03-02 00:31:38.232555 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-02 00:31:38.232560 | orchestrator | ok: [testbed-node-0] =>  2026-03-02 00:31:38.232565 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-02 00:31:38.232571 | orchestrator | ok: [testbed-node-1] =>  2026-03-02 00:31:38.232576 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-02 00:31:38.232581 | orchestrator | ok: [testbed-node-2] =>  2026-03-02 00:31:38.232586 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-02 00:31:38.232591 | orchestrator | 2026-03-02 00:31:38.232597 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-02 00:31:38.232602 | orchestrator | Monday 02 March 2026 00:31:33 +0000 (0:00:00.246) 0:05:11.941 ********** 2026-03-02 00:31:38.232607 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:31:38.232612 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:31:38.232617 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:31:38.232622 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:31:38.232627 | orchestrator | skipping: [testbed-node-0] 
2026-03-02 00:31:38.232632 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:31:38.232637 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:31:38.232642 | orchestrator | 2026-03-02 00:31:38.232648 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-02 00:31:38.232653 | orchestrator | Monday 02 March 2026 00:31:33 +0000 (0:00:00.229) 0:05:12.170 ********** 2026-03-02 00:31:38.232658 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:31:38.232664 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:31:38.232669 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:31:38.232674 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:31:38.232679 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:31:38.232684 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:31:38.232689 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:31:38.232694 | orchestrator | 2026-03-02 00:31:38.232700 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-02 00:31:38.232705 | orchestrator | Monday 02 March 2026 00:31:33 +0000 (0:00:00.353) 0:05:12.524 ********** 2026-03-02 00:31:38.232711 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:31:38.232718 | orchestrator | 2026-03-02 00:31:38.232723 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-02 00:31:38.232728 | orchestrator | Monday 02 March 2026 00:31:34 +0000 (0:00:00.401) 0:05:12.925 ********** 2026-03-02 00:31:38.232733 | orchestrator | ok: [testbed-manager] 2026-03-02 00:31:38.232739 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:31:38.232744 | orchestrator | ok: [testbed-node-3] 2026-03-02 
00:31:38.232749 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:31:38.232755 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:31:38.232760 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:31:38.232765 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:31:38.232770 | orchestrator | 2026-03-02 00:31:38.232775 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-02 00:31:38.232781 | orchestrator | Monday 02 March 2026 00:31:35 +0000 (0:00:00.826) 0:05:13.751 ********** 2026-03-02 00:31:38.232785 | orchestrator | ok: [testbed-manager] 2026-03-02 00:31:38.232792 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:31:38.232797 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:31:38.232802 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:31:38.232806 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:31:38.232814 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:31:38.232818 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:31:38.232823 | orchestrator | 2026-03-02 00:31:38.232827 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-02 00:31:38.232832 | orchestrator | Monday 02 March 2026 00:31:37 +0000 (0:00:02.789) 0:05:16.541 ********** 2026-03-02 00:31:38.232837 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-02 00:31:38.232842 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-02 00:31:38.232847 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-02 00:31:38.232851 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-02 00:31:38.232856 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-02 00:31:38.232877 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-02 00:31:38.232883 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:31:38.232887 | orchestrator | skipping: 
[testbed-node-5] => (item=containerd)  2026-03-02 00:31:38.232892 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-02 00:31:38.232896 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-03-02 00:31:38.232901 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:31:38.232905 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-02 00:31:38.232910 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-02 00:31:38.232914 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-02 00:31:38.232919 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:31:38.232924 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-02 00:31:38.232932 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-02 00:32:40.563966 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-02 00:32:40.564093 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:32:40.564113 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-03-02 00:32:40.564128 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-02 00:32:40.564141 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:32:40.564153 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-02 00:32:40.564166 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:32:40.564180 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-02 00:32:40.564195 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-02 00:32:40.564209 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-02 00:32:40.564222 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:32:40.564235 | orchestrator | 2026-03-02 00:32:40.564249 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-02 00:32:40.564263 | orchestrator | 
Monday 02 March 2026 00:31:38 +0000 (0:00:00.584) 0:05:17.125 ********** 2026-03-02 00:32:40.564276 | orchestrator | ok: [testbed-manager] 2026-03-02 00:32:40.564288 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:32:40.564300 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:32:40.564315 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:32:40.564329 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:32:40.564343 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:32:40.564356 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:32:40.564369 | orchestrator | 2026-03-02 00:32:40.564383 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-03-02 00:32:40.564397 | orchestrator | Monday 02 March 2026 00:31:45 +0000 (0:00:07.128) 0:05:24.253 ********** 2026-03-02 00:32:40.564411 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:32:40.564425 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:32:40.564439 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:32:40.564453 | orchestrator | ok: [testbed-manager] 2026-03-02 00:32:40.564466 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:32:40.564480 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:32:40.564523 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:32:40.564538 | orchestrator | 2026-03-02 00:32:40.564552 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-03-02 00:32:40.564566 | orchestrator | Monday 02 March 2026 00:31:46 +0000 (0:00:01.029) 0:05:25.283 ********** 2026-03-02 00:32:40.564581 | orchestrator | ok: [testbed-manager] 2026-03-02 00:32:40.564595 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:32:40.564609 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:32:40.564619 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:32:40.564629 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:32:40.564672 | 
orchestrator | changed: [testbed-node-0] 2026-03-02 00:32:40.564681 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:32:40.564691 | orchestrator | 2026-03-02 00:32:40.564700 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-03-02 00:32:40.564710 | orchestrator | Monday 02 March 2026 00:31:54 +0000 (0:00:08.371) 0:05:33.655 ********** 2026-03-02 00:32:40.564720 | orchestrator | changed: [testbed-manager] 2026-03-02 00:32:40.564730 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:32:40.564738 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:32:40.564747 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:32:40.564757 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:32:40.564766 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:32:40.564776 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:32:40.564785 | orchestrator | 2026-03-02 00:32:40.564794 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-03-02 00:32:40.564804 | orchestrator | Monday 02 March 2026 00:31:58 +0000 (0:00:03.211) 0:05:36.866 ********** 2026-03-02 00:32:40.564814 | orchestrator | ok: [testbed-manager] 2026-03-02 00:32:40.564823 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:32:40.564857 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:32:40.564871 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:32:40.564883 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:32:40.564896 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:32:40.564909 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:32:40.564924 | orchestrator | 2026-03-02 00:32:40.564933 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-03-02 00:32:40.564955 | orchestrator | Monday 02 March 2026 00:31:59 +0000 (0:00:01.497) 0:05:38.364 ********** 2026-03-02 00:32:40.564963 | orchestrator | changed: 
[testbed-node-3] 2026-03-02 00:32:40.564971 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:32:40.564979 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:32:40.564987 | orchestrator | ok: [testbed-manager] 2026-03-02 00:32:40.564995 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:32:40.565006 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:32:40.565018 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:32:40.565031 | orchestrator | 2026-03-02 00:32:40.565044 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-03-02 00:32:40.565058 | orchestrator | Monday 02 March 2026 00:32:01 +0000 (0:00:01.352) 0:05:39.716 ********** 2026-03-02 00:32:40.565070 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:32:40.565083 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:32:40.565096 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:32:40.565110 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:32:40.565123 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:32:40.565135 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:32:40.565148 | orchestrator | changed: [testbed-manager] 2026-03-02 00:32:40.565160 | orchestrator | 2026-03-02 00:32:40.565174 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-03-02 00:32:40.565188 | orchestrator | Monday 02 March 2026 00:32:01 +0000 (0:00:00.779) 0:05:40.496 ********** 2026-03-02 00:32:40.565202 | orchestrator | ok: [testbed-manager] 2026-03-02 00:32:40.565215 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:32:40.565227 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:32:40.565244 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:32:40.565251 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:32:40.565257 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:32:40.565264 | orchestrator | changed: [testbed-node-5] 2026-03-02 
00:32:40.565271 | orchestrator | 2026-03-02 00:32:40.565278 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-03-02 00:32:40.565303 | orchestrator | Monday 02 March 2026 00:32:11 +0000 (0:00:10.091) 0:05:50.587 ********** 2026-03-02 00:32:40.565310 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:32:40.565317 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:32:40.565324 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:32:40.565330 | orchestrator | changed: [testbed-manager] 2026-03-02 00:32:40.565337 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:32:40.565343 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:32:40.565350 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:32:40.565361 | orchestrator | 2026-03-02 00:32:40.565372 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-03-02 00:32:40.565383 | orchestrator | Monday 02 March 2026 00:32:12 +0000 (0:00:00.899) 0:05:51.487 ********** 2026-03-02 00:32:40.565394 | orchestrator | ok: [testbed-manager] 2026-03-02 00:32:40.565405 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:32:40.565417 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:32:40.565428 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:32:40.565439 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:32:40.565450 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:32:40.565456 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:32:40.565463 | orchestrator | 2026-03-02 00:32:40.565470 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-03-02 00:32:40.565477 | orchestrator | Monday 02 March 2026 00:32:22 +0000 (0:00:09.442) 0:06:00.929 ********** 2026-03-02 00:32:40.565484 | orchestrator | ok: [testbed-manager] 2026-03-02 00:32:40.565490 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:32:40.565497 | 
orchestrator | changed: [testbed-node-2] 2026-03-02 00:32:40.565504 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:32:40.565511 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:32:40.565517 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:32:40.565524 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:32:40.565531 | orchestrator | 2026-03-02 00:32:40.565543 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-03-02 00:32:40.565554 | orchestrator | Monday 02 March 2026 00:32:33 +0000 (0:00:11.489) 0:06:12.419 ********** 2026-03-02 00:32:40.565566 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-03-02 00:32:40.565578 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-03-02 00:32:40.565590 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-03-02 00:32:40.565601 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-03-02 00:32:40.565613 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-03-02 00:32:40.565625 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-03-02 00:32:40.565635 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-03-02 00:32:40.565646 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-03-02 00:32:40.565656 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-03-02 00:32:40.565666 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-03-02 00:32:40.565676 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-03-02 00:32:40.565687 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-03-02 00:32:40.565698 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-03-02 00:32:40.565707 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-03-02 00:32:40.565718 | orchestrator | 2026-03-02 00:32:40.565730 | orchestrator | TASK [osism.services.docker : Install python3 
docker package] ****************** 2026-03-02 00:32:40.565742 | orchestrator | Monday 02 March 2026 00:32:34 +0000 (0:00:01.200) 0:06:13.620 ********** 2026-03-02 00:32:40.565751 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:32:40.565771 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:32:40.565782 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:32:40.565793 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:32:40.565803 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:32:40.565813 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:32:40.565823 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:32:40.565858 | orchestrator | 2026-03-02 00:32:40.565871 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-03-02 00:32:40.565882 | orchestrator | Monday 02 March 2026 00:32:35 +0000 (0:00:00.497) 0:06:14.117 ********** 2026-03-02 00:32:40.565894 | orchestrator | ok: [testbed-manager] 2026-03-02 00:32:40.565905 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:32:40.565934 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:32:40.565946 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:32:40.565971 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:32:40.565982 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:32:40.565991 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:32:40.566001 | orchestrator | 2026-03-02 00:32:40.566011 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-03-02 00:32:40.566114 | orchestrator | Monday 02 March 2026 00:32:39 +0000 (0:00:04.173) 0:06:18.290 ********** 2026-03-02 00:32:40.566124 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:32:40.566134 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:32:40.566143 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:32:40.566152 | orchestrator | skipping: 
[testbed-manager] 2026-03-02 00:32:40.566162 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:32:40.566172 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:32:40.566182 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:32:40.566191 | orchestrator | 2026-03-02 00:32:40.566202 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-03-02 00:32:40.566213 | orchestrator | Monday 02 March 2026 00:32:40 +0000 (0:00:00.667) 0:06:18.958 ********** 2026-03-02 00:32:40.566222 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-03-02 00:32:40.566233 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-03-02 00:32:40.566291 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:32:40.566303 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-03-02 00:32:40.566312 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-03-02 00:32:40.566322 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:32:40.566332 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-03-02 00:32:40.566342 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-03-02 00:32:40.566351 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:32:40.566376 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-03-02 00:32:59.998785 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-03-02 00:32:59.998998 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:32:59.999023 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-03-02 00:32:59.999037 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-03-02 00:32:59.999048 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:32:59.999065 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-03-02 00:32:59.999084 | 
orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-03-02 00:32:59.999102 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:32:59.999113 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-03-02 00:32:59.999125 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-03-02 00:32:59.999136 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:32:59.999147 | orchestrator | 2026-03-02 00:32:59.999161 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-03-02 00:32:59.999199 | orchestrator | Monday 02 March 2026 00:32:40 +0000 (0:00:00.566) 0:06:19.524 ********** 2026-03-02 00:32:59.999211 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:32:59.999222 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:32:59.999233 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:32:59.999244 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:32:59.999261 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:32:59.999281 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:32:59.999299 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:32:59.999320 | orchestrator | 2026-03-02 00:32:59.999340 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-02 00:32:59.999361 | orchestrator | Monday 02 March 2026 00:32:41 +0000 (0:00:00.503) 0:06:20.027 ********** 2026-03-02 00:32:59.999375 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:32:59.999388 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:32:59.999401 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:32:59.999413 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:32:59.999426 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:32:59.999438 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:32:59.999451 | orchestrator | skipping: [testbed-node-2] 
2026-03-02 00:32:59.999464 | orchestrator |
2026-03-02 00:32:59.999477 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-02 00:32:59.999491 | orchestrator | Monday 02 March 2026 00:32:41 +0000 (0:00:00.486) 0:06:20.513 **********
2026-03-02 00:32:59.999503 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:32:59.999516 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:32:59.999529 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:32:59.999541 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:32:59.999553 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:32:59.999567 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:32:59.999579 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:32:59.999592 | orchestrator |
2026-03-02 00:32:59.999612 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-03-02 00:32:59.999632 | orchestrator | Monday 02 March 2026 00:32:42 +0000 (0:00:00.509) 0:06:21.022 **********
2026-03-02 00:32:59.999651 | orchestrator | ok: [testbed-manager]
2026-03-02 00:32:59.999671 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:32:59.999682 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:32:59.999693 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:32:59.999704 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:32:59.999803 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:32:59.999849 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:32:59.999866 | orchestrator |
2026-03-02 00:32:59.999881 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-03-02 00:32:59.999899 | orchestrator | Monday 02 March 2026 00:32:44 +0000 (0:00:02.083) 0:06:23.106 **********
2026-03-02 00:32:59.999916 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:32:59.999930 | orchestrator |
2026-03-02 00:32:59.999957 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-03-02 00:32:59.999968 | orchestrator | Monday 02 March 2026 00:32:45 +0000 (0:00:00.790) 0:06:23.897 **********
2026-03-02 00:32:59.999979 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:32:59.999991 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:33:00.000001 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:33:00.000013 | orchestrator | ok: [testbed-manager]
2026-03-02 00:33:00.000025 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:33:00.000037 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:33:00.000048 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:33:00.000059 | orchestrator |
2026-03-02 00:33:00.000070 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-03-02 00:33:00.000094 | orchestrator | Monday 02 March 2026 00:32:46 +0000 (0:00:00.845) 0:06:24.743 **********
2026-03-02 00:33:00.000106 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:33:00.000117 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:33:00.000128 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:33:00.000139 | orchestrator | ok: [testbed-manager]
2026-03-02 00:33:00.000150 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:33:00.000161 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:33:00.000172 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:33:00.000183 | orchestrator |
2026-03-02 00:33:00.000194 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-03-02 00:33:00.000205 | orchestrator | Monday 02 March 2026 00:32:47 +0000 (0:00:01.044) 0:06:25.787 **********
2026-03-02 00:33:00.000216 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:33:00.000226 | orchestrator | ok: [testbed-manager]
2026-03-02 00:33:00.000237 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:33:00.000248 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:33:00.000259 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:33:00.000270 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:33:00.000281 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:33:00.000292 | orchestrator |
2026-03-02 00:33:00.000303 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-03-02 00:33:00.000338 | orchestrator | Monday 02 March 2026 00:32:48 +0000 (0:00:01.273) 0:06:27.061 **********
2026-03-02 00:33:00.000349 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:33:00.000361 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:33:00.000372 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:33:00.000383 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:33:00.000393 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:33:00.000404 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:33:00.000415 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:33:00.000426 | orchestrator |
2026-03-02 00:33:00.000437 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-03-02 00:33:00.000449 | orchestrator | Monday 02 March 2026 00:32:49 +0000 (0:00:01.601) 0:06:28.663 **********
2026-03-02 00:33:00.000461 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:33:00.000479 | orchestrator | ok: [testbed-manager]
2026-03-02 00:33:00.000498 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:33:00.000520 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:33:00.000545 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:33:00.000562 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:33:00.000579 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:33:00.000597 | orchestrator |
2026-03-02 00:33:00.000614 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-03-02 00:33:00.000631 | orchestrator | Monday 02 March 2026 00:32:51 +0000 (0:00:01.265) 0:06:29.929 **********
2026-03-02 00:33:00.000648 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:33:00.000666 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:33:00.000684 | orchestrator | changed: [testbed-manager]
2026-03-02 00:33:00.000701 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:33:00.000716 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:33:00.000734 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:33:00.000751 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:33:00.000770 | orchestrator |
2026-03-02 00:33:00.000788 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-03-02 00:33:00.000805 | orchestrator | Monday 02 March 2026 00:32:52 +0000 (0:00:01.487) 0:06:31.416 **********
2026-03-02 00:33:00.000851 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:33:00.000871 | orchestrator |
2026-03-02 00:33:00.000887 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-03-02 00:33:00.000903 | orchestrator | Monday 02 March 2026 00:32:53 +0000 (0:00:00.972) 0:06:32.388 **********
2026-03-02 00:33:00.000941 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:33:00.000960 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:33:00.000979 | orchestrator | ok: [testbed-manager]
2026-03-02 00:33:00.000997 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:33:00.001014 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:33:00.001031 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:33:00.001050 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:33:00.001068 | orchestrator |
2026-03-02 00:33:00.001086 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-03-02 00:33:00.001102 | orchestrator | Monday 02 March 2026 00:32:55 +0000 (0:00:01.434) 0:06:33.823 **********
2026-03-02 00:33:00.001120 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:33:00.001138 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:33:00.001155 | orchestrator | ok: [testbed-manager]
2026-03-02 00:33:00.001173 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:33:00.001191 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:33:00.001209 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:33:00.001226 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:33:00.001244 | orchestrator |
2026-03-02 00:33:00.001263 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-03-02 00:33:00.001282 | orchestrator | Monday 02 March 2026 00:32:56 +0000 (0:00:01.137) 0:06:34.960 **********
2026-03-02 00:33:00.001300 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:33:00.001316 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:33:00.001334 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:33:00.001353 | orchestrator | ok: [testbed-manager]
2026-03-02 00:33:00.001371 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:33:00.001391 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:33:00.001410 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:33:00.001429 | orchestrator |
2026-03-02 00:33:00.001452 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-03-02 00:33:00.001480 | orchestrator | Monday 02 March 2026 00:32:57 +0000 (0:00:01.242) 0:06:36.203 **********
2026-03-02 00:33:00.001498 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:33:00.001518 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:33:00.001537 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:33:00.001555 | orchestrator | ok: [testbed-manager]
2026-03-02 00:33:00.001573 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:33:00.001585 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:33:00.001596 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:33:00.001607 | orchestrator |
2026-03-02 00:33:00.001618 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-03-02 00:33:00.001629 | orchestrator | Monday 02 March 2026 00:32:59 +0000 (0:00:01.507) 0:06:37.710 **********
2026-03-02 00:33:00.001641 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:33:00.001654 | orchestrator |
2026-03-02 00:33:00.001665 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-02 00:33:00.001677 | orchestrator | Monday 02 March 2026 00:32:59 +0000 (0:00:00.832) 0:06:38.543 **********
2026-03-02 00:33:00.001688 | orchestrator |
2026-03-02 00:33:00.001699 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-02 00:33:00.001710 | orchestrator | Monday 02 March 2026 00:32:59 +0000 (0:00:00.039) 0:06:38.582 **********
2026-03-02 00:33:00.001721 | orchestrator |
2026-03-02 00:33:00.001732 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-02 00:33:00.001743 | orchestrator | Monday 02 March 2026 00:32:59 +0000 (0:00:00.037) 0:06:38.620 **********
2026-03-02 00:33:00.001754 | orchestrator |
2026-03-02 00:33:00.001765 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-02 00:33:00.001794 | orchestrator | Monday 02 March 2026 00:32:59 +0000 (0:00:00.052) 0:06:38.673 **********
2026-03-02 00:33:26.282298 | orchestrator |
2026-03-02 00:33:26.282416 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-02 00:33:26.282458 | orchestrator | Monday 02 March 2026 00:33:00 +0000 (0:00:00.047) 0:06:38.720 **********
2026-03-02 00:33:26.282470 | orchestrator |
2026-03-02 00:33:26.282482 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-02 00:33:26.282493 | orchestrator | Monday 02 March 2026 00:33:00 +0000 (0:00:00.039) 0:06:38.759 **********
2026-03-02 00:33:26.282504 | orchestrator |
2026-03-02 00:33:26.282516 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-02 00:33:26.282527 | orchestrator | Monday 02 March 2026 00:33:00 +0000 (0:00:00.046) 0:06:38.806 **********
2026-03-02 00:33:26.282538 | orchestrator |
2026-03-02 00:33:26.282549 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-02 00:33:26.282560 | orchestrator | Monday 02 March 2026 00:33:00 +0000 (0:00:00.038) 0:06:38.844 **********
2026-03-02 00:33:26.282571 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:33:26.282583 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:33:26.282594 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:33:26.282605 | orchestrator |
2026-03-02 00:33:26.282616 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-03-02 00:33:26.282628 | orchestrator | Monday 02 March 2026 00:33:01 +0000 (0:00:01.195) 0:06:40.040 **********
2026-03-02 00:33:26.282639 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:33:26.282650 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:33:26.282661 | orchestrator | changed: [testbed-manager]
2026-03-02 00:33:26.282672 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:33:26.282683 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:33:26.282694 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:33:26.282705 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:33:26.282717 | orchestrator |
2026-03-02 00:33:26.282728 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-03-02 00:33:26.282739 | orchestrator | Monday 02 March 2026 00:33:02 +0000 (0:00:01.547) 0:06:41.587 **********
2026-03-02 00:33:26.282750 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:33:26.282761 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:33:26.282772 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:33:26.282783 | orchestrator | changed: [testbed-manager]
2026-03-02 00:33:26.282794 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:33:26.282910 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:33:26.282924 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:33:26.282937 | orchestrator |
2026-03-02 00:33:26.282950 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-03-02 00:33:26.282963 | orchestrator | Monday 02 March 2026 00:33:04 +0000 (0:00:01.250) 0:06:42.838 **********
2026-03-02 00:33:26.282975 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:33:26.282987 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:33:26.282999 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:33:26.283011 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:33:26.283024 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:33:26.283037 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:33:26.283049 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:33:26.283060 | orchestrator |
2026-03-02 00:33:26.283071 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-03-02 00:33:26.283082 | orchestrator | Monday 02 March 2026 00:33:06 +0000 (0:00:02.282) 0:06:45.120 **********
2026-03-02 00:33:26.283093 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:33:26.283104 | orchestrator |
2026-03-02 00:33:26.283115 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-03-02 00:33:26.283126 | orchestrator | Monday 02 March 2026 00:33:06 +0000 (0:00:00.093) 0:06:45.213 **********
2026-03-02 00:33:26.283137 | orchestrator | ok: [testbed-manager]
2026-03-02 00:33:26.283148 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:33:26.283159 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:33:26.283169 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:33:26.283181 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:33:26.283201 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:33:26.283213 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:33:26.283224 | orchestrator |
2026-03-02 00:33:26.283235 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-03-02 00:33:26.283261 | orchestrator | Monday 02 March 2026 00:33:07 +0000 (0:00:01.061) 0:06:46.275 **********
2026-03-02 00:33:26.283272 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:33:26.283283 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:33:26.283294 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:33:26.283305 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:33:26.283316 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:33:26.283327 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:33:26.283337 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:33:26.283348 | orchestrator |
2026-03-02 00:33:26.283359 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-03-02 00:33:26.283370 | orchestrator | Monday 02 March 2026 00:33:08 +0000 (0:00:00.728) 0:06:47.004 **********
2026-03-02 00:33:26.283383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:33:26.283396 | orchestrator |
2026-03-02 00:33:26.283407 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-03-02 00:33:26.283418 | orchestrator | Monday 02 March 2026 00:33:09 +0000 (0:00:00.880) 0:06:47.885 **********
2026-03-02 00:33:26.283429 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:33:26.283440 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:33:26.283451 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:33:26.283462 | orchestrator | ok: [testbed-manager]
2026-03-02 00:33:26.283473 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:33:26.283484 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:33:26.283494 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:33:26.283505 | orchestrator |
2026-03-02 00:33:26.283516 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-03-02 00:33:26.283527 | orchestrator | Monday 02 March 2026 00:33:10 +0000 (0:00:00.877) 0:06:48.762 **********
2026-03-02 00:33:26.283538 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-03-02 00:33:26.283568 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-03-02 00:33:26.283580 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-03-02 00:33:26.283592 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-03-02 00:33:26.283603 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-03-02 00:33:26.283614 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-03-02 00:33:26.283625 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-03-02 00:33:26.283636 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-03-02 00:33:26.283647 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-03-02 00:33:26.283658 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-03-02 00:33:26.283669 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-03-02 00:33:26.283680 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-03-02 00:33:26.283691 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-03-02 00:33:26.283702 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-03-02 00:33:26.283713 | orchestrator |
2026-03-02 00:33:26.283724 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-03-02 00:33:26.283735 | orchestrator | Monday 02 March 2026 00:33:12 +0000 (0:00:02.714) 0:06:51.476 **********
2026-03-02 00:33:26.283746 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:33:26.283758 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:33:26.283768 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:33:26.283779 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:33:26.283830 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:33:26.283843 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:33:26.283854 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:33:26.283865 | orchestrator |
2026-03-02 00:33:26.283877 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-03-02 00:33:26.283888 | orchestrator | Monday 02 March 2026 00:33:13 +0000 (0:00:00.462) 0:06:51.938 **********
2026-03-02 00:33:26.283900 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:33:26.283913 | orchestrator |
2026-03-02 00:33:26.283924 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-03-02 00:33:26.283935 | orchestrator | Monday 02 March 2026 00:33:14 +0000 (0:00:00.781) 0:06:52.720 **********
2026-03-02 00:33:26.283946 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:33:26.283957 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:33:26.283968 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:33:26.283979 | orchestrator | ok: [testbed-manager]
2026-03-02 00:33:26.283990 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:33:26.284001 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:33:26.284012 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:33:26.284023 | orchestrator |
2026-03-02 00:33:26.284033 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-03-02 00:33:26.284045 | orchestrator | Monday 02 March 2026 00:33:14 +0000 (0:00:00.836) 0:06:53.556 **********
2026-03-02 00:33:26.284064 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:33:26.284134 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:33:26.284158 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:33:26.284177 | orchestrator | ok: [testbed-manager]
2026-03-02 00:33:26.284195 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:33:26.284279 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:33:26.284298 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:33:26.284316 | orchestrator |
2026-03-02 00:33:26.284334 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-03-02 00:33:26.284354 | orchestrator | Monday 02 March 2026 00:33:15 +0000 (0:00:01.009) 0:06:54.566 **********
2026-03-02 00:33:26.284373 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:33:26.284389 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:33:26.284407 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:33:26.284436 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:33:26.284456 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:33:26.284475 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:33:26.284495 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:33:26.284514 | orchestrator | 2026-03-02 00:33:26.284533 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-03-02 00:33:26.284552 | orchestrator | Monday 02 March 2026 00:33:16 +0000 (0:00:00.493) 0:06:55.059 ********** 2026-03-02 00:33:26.284572 | orchestrator | ok: [testbed-manager] 2026-03-02 00:33:26.284591 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:33:26.284611 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:33:26.284630 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:33:26.284649 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:33:26.284668 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:33:26.284687 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:33:26.284707 | orchestrator | 2026-03-02 00:33:26.284726 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-03-02 00:33:26.284745 | orchestrator | Monday 02 March 2026 00:33:17 +0000 (0:00:01.435) 0:06:56.494 ********** 2026-03-02 00:33:26.284764 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:33:26.284784 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:33:26.284869 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:33:26.284892 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:33:26.284911 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:33:26.284945 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:33:26.284965 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:33:26.284985 | orchestrator | 2026-03-02 00:33:26.285005 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-03-02 00:33:26.285024 | orchestrator | Monday 02 March 2026 00:33:18 +0000 (0:00:00.506) 0:06:57.001 ********** 2026-03-02 00:33:26.285044 | orchestrator | 
ok: [testbed-manager] 2026-03-02 00:33:26.285064 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:33:26.285084 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:33:26.285103 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:33:26.285123 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:33:26.285142 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:33:26.285177 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:33:58.007250 | orchestrator | 2026-03-02 00:33:58.007339 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2026-03-02 00:33:58.007353 | orchestrator | Monday 02 March 2026 00:33:26 +0000 (0:00:08.021) 0:07:05.022 ********** 2026-03-02 00:33:58.007359 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:33:58.007365 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:33:58.007370 | orchestrator | ok: [testbed-manager] 2026-03-02 00:33:58.007375 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:33:58.007379 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:33:58.007384 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:33:58.007388 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:33:58.007392 | orchestrator | 2026-03-02 00:33:58.007397 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-03-02 00:33:58.007402 | orchestrator | Monday 02 March 2026 00:33:27 +0000 (0:00:01.444) 0:07:06.467 ********** 2026-03-02 00:33:58.007406 | orchestrator | ok: [testbed-manager] 2026-03-02 00:33:58.007410 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:33:58.007415 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:33:58.007419 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:33:58.007423 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:33:58.007428 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:33:58.007432 | orchestrator | changed: [testbed-node-2] 2026-03-02 
00:33:58.007436 | orchestrator | 2026-03-02 00:33:58.007441 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-03-02 00:33:58.007445 | orchestrator | Monday 02 March 2026 00:33:29 +0000 (0:00:01.727) 0:07:08.194 ********** 2026-03-02 00:33:58.007449 | orchestrator | ok: [testbed-manager] 2026-03-02 00:33:58.007454 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:33:58.007458 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:33:58.007462 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:33:58.007466 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:33:58.007470 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:33:58.007474 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:33:58.007479 | orchestrator | 2026-03-02 00:33:58.007483 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-02 00:33:58.007487 | orchestrator | Monday 02 March 2026 00:33:31 +0000 (0:00:01.750) 0:07:09.945 ********** 2026-03-02 00:33:58.007492 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:33:58.007499 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:33:58.007505 | orchestrator | ok: [testbed-manager] 2026-03-02 00:33:58.007511 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:33:58.007518 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:33:58.007524 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:33:58.007530 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:33:58.007537 | orchestrator | 2026-03-02 00:33:58.007546 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-02 00:33:58.007555 | orchestrator | Monday 02 March 2026 00:33:32 +0000 (0:00:01.070) 0:07:11.015 ********** 2026-03-02 00:33:58.007563 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:33:58.007572 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:33:58.007583 | orchestrator | skipping: 
[testbed-node-5] 2026-03-02 00:33:58.007620 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:33:58.007627 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:33:58.007633 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:33:58.007640 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:33:58.007647 | orchestrator | 2026-03-02 00:33:58.007653 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-03-02 00:33:58.007660 | orchestrator | Monday 02 March 2026 00:33:33 +0000 (0:00:00.771) 0:07:11.786 ********** 2026-03-02 00:33:58.007666 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:33:58.007672 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:33:58.007679 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:33:58.007686 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:33:58.007693 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:33:58.007700 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:33:58.007707 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:33:58.007712 | orchestrator | 2026-03-02 00:33:58.007717 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-03-02 00:33:58.007721 | orchestrator | Monday 02 March 2026 00:33:33 +0000 (0:00:00.522) 0:07:12.309 ********** 2026-03-02 00:33:58.007725 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:33:58.007729 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:33:58.007733 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:33:58.007737 | orchestrator | ok: [testbed-manager] 2026-03-02 00:33:58.007742 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:33:58.007746 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:33:58.007750 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:33:58.007754 | orchestrator | 2026-03-02 00:33:58.007758 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 
2026-03-02 00:33:58.007801 | orchestrator | Monday 02 March 2026 00:33:34 +0000 (0:00:00.487) 0:07:12.797 **********
2026-03-02 00:33:58.007806 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:33:58.007810 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:33:58.007815 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:33:58.007821 | orchestrator | ok: [testbed-manager]
2026-03-02 00:33:58.007828 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:33:58.007834 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:33:58.007841 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:33:58.007848 | orchestrator |
2026-03-02 00:33:58.007855 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-02 00:33:58.007861 | orchestrator | Monday 02 March 2026 00:33:34 +0000 (0:00:00.658) 0:07:13.455 **********
2026-03-02 00:33:58.007868 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:33:58.007875 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:33:58.007882 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:33:58.007889 | orchestrator | ok: [testbed-manager]
2026-03-02 00:33:58.007896 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:33:58.007903 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:33:58.007910 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:33:58.007917 | orchestrator |
2026-03-02 00:33:58.007924 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-02 00:33:58.007932 | orchestrator | Monday 02 March 2026 00:33:35 +0000 (0:00:00.498) 0:07:13.953 **********
2026-03-02 00:33:58.007938 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:33:58.007943 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:33:58.007948 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:33:58.007953 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:33:58.007958 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:33:58.007963 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:33:58.007968 | orchestrator | ok: [testbed-manager]
2026-03-02 00:33:58.007973 | orchestrator |
2026-03-02 00:33:58.007993 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-02 00:33:58.007998 | orchestrator | Monday 02 March 2026 00:33:39 +0000 (0:00:04.457) 0:07:18.411 **********
2026-03-02 00:33:58.008003 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:33:58.008009 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:33:58.008034 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:33:58.008040 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:33:58.008045 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:33:58.008049 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:33:58.008055 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:33:58.008060 | orchestrator |
2026-03-02 00:33:58.008065 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-02 00:33:58.008070 | orchestrator | Monday 02 March 2026 00:33:40 +0000 (0:00:00.513) 0:07:18.924 **********
2026-03-02 00:33:58.008078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:33:58.008085 | orchestrator |
2026-03-02 00:33:58.008090 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-02 00:33:58.008095 | orchestrator | Monday 02 March 2026 00:33:41 +0000 (0:00:00.996) 0:07:19.921 **********
2026-03-02 00:33:58.008100 | orchestrator | ok: [testbed-manager]
2026-03-02 00:33:58.008105 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:33:58.008110 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:33:58.008115 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:33:58.008120 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:33:58.008125 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:33:58.008129 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:33:58.008134 | orchestrator |
2026-03-02 00:33:58.008139 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-02 00:33:58.008144 | orchestrator | Monday 02 March 2026 00:33:43 +0000 (0:00:02.149) 0:07:22.070 **********
2026-03-02 00:33:58.008149 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:33:58.008155 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:33:58.008160 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:33:58.008165 | orchestrator | ok: [testbed-manager]
2026-03-02 00:33:58.008170 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:33:58.008176 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:33:58.008181 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:33:58.008185 | orchestrator |
2026-03-02 00:33:58.008189 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-02 00:33:58.008194 | orchestrator | Monday 02 March 2026 00:33:44 +0000 (0:00:01.135) 0:07:23.206 **********
2026-03-02 00:33:58.008198 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:33:58.008202 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:33:58.008206 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:33:58.008211 | orchestrator | ok: [testbed-manager]
2026-03-02 00:33:58.008215 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:33:58.008219 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:33:58.008223 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:33:58.008227 | orchestrator |
2026-03-02 00:33:58.008232 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-02 00:33:58.008236 | orchestrator | Monday 02 March 2026 00:33:45 +0000 (0:00:00.853) 0:07:24.059 **********
2026-03-02 00:33:58.008241 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-02 00:33:58.008246 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-02 00:33:58.008251 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-02 00:33:58.008255 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-02 00:33:58.008262 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-02 00:33:58.008266 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-02 00:33:58.008274 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-02 00:33:58.008278 | orchestrator |
2026-03-02 00:33:58.008282 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-02 00:33:58.008286 | orchestrator | Monday 02 March 2026 00:33:47 +0000 (0:00:01.957) 0:07:26.017 **********
2026-03-02 00:33:58.008291 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:33:58.008295 | orchestrator |
2026-03-02 00:33:58.008299 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-03-02 00:33:58.008304 | orchestrator | Monday 02 March 2026 00:33:48 +0000 (0:00:00.766) 0:07:26.784 **********
2026-03-02 00:33:58.008308 | orchestrator | changed: [testbed-manager]
2026-03-02 00:33:58.008312 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:33:58.008316 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:33:58.008320 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:33:58.008324 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:33:58.008329 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:33:58.008333 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:33:58.008337 | orchestrator |
2026-03-02 00:33:58.008344 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-03-02 00:34:29.369396 | orchestrator | Monday 02 March 2026 00:33:58 +0000 (0:00:09.898) 0:07:36.682 **********
2026-03-02 00:34:29.369477 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:34:29.369484 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:34:29.369489 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:34:29.369493 | orchestrator | ok: [testbed-manager]
2026-03-02 00:34:29.369497 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:34:29.369502 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:34:29.369506 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:34:29.369510 | orchestrator |
2026-03-02 00:34:29.369515 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-03-02 00:34:29.369519 | orchestrator | Monday 02 March 2026 00:33:59 +0000 (0:00:01.989) 0:07:38.672 **********
2026-03-02 00:34:29.369524 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:34:29.369527 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:34:29.369531 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:34:29.369535 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:34:29.369539 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:34:29.369543 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:34:29.369547 | orchestrator |
2026-03-02 00:34:29.369551 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-03-02 00:34:29.369555 | orchestrator | Monday 02 March 2026 00:34:01 +0000 (0:00:01.303) 0:07:39.976 **********
2026-03-02 00:34:29.369559 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:34:29.369564 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:34:29.369568 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:34:29.369572 | orchestrator | changed: [testbed-manager]
2026-03-02 00:34:29.369576 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:34:29.369579 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:34:29.369583 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:34:29.369587 | orchestrator |
2026-03-02 00:34:29.369591 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-03-02 00:34:29.369595 | orchestrator |
2026-03-02 00:34:29.369599 | orchestrator | TASK [Include hardening role] **************************************************
2026-03-02 00:34:29.369603 | orchestrator | Monday 02 March 2026 00:34:02 +0000 (0:00:01.378) 0:07:41.354 **********
2026-03-02 00:34:29.369607 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:34:29.369611 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:34:29.369629 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:34:29.369633 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:34:29.369637 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:34:29.369641 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:34:29.369645 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:34:29.369648 | orchestrator |
2026-03-02 00:34:29.369652 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-03-02 00:34:29.369656 | orchestrator |
2026-03-02 00:34:29.369660 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-03-02 00:34:29.369664 | orchestrator | Monday 02 March 2026 00:34:03 +0000 (0:00:00.483) 0:07:41.837 **********
2026-03-02 00:34:29.369668 | orchestrator | changed: [testbed-manager]
2026-03-02 00:34:29.369671 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:34:29.369675 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:34:29.369679 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:34:29.369684 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:34:29.369687 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:34:29.369691 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:34:29.369695 | orchestrator |
2026-03-02 00:34:29.369699 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-03-02 00:34:29.369703 | orchestrator | Monday 02 March 2026 00:34:04 +0000 (0:00:01.341) 0:07:43.179 **********
2026-03-02 00:34:29.369718 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:34:29.369722 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:34:29.369726 | orchestrator | ok: [testbed-manager]
2026-03-02 00:34:29.369772 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:34:29.369776 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:34:29.369780 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:34:29.369784 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:34:29.369788 | orchestrator |
2026-03-02 00:34:29.369792 | orchestrator | TASK [Include auditd role] *****************************************************
2026-03-02 00:34:29.369795 | orchestrator | Monday 02 March 2026 00:34:05 +0000 (0:00:01.399) 0:07:44.578 **********
2026-03-02 00:34:29.369799 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:34:29.369803 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:34:29.369818 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:34:29.369822 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:34:29.369826 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:34:29.369830 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:34:29.369834 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:34:29.369838 | orchestrator |
2026-03-02 00:34:29.369842 | orchestrator | TASK [Include smartd role] *****************************************************
2026-03-02 00:34:29.369846 | orchestrator | Monday 02 March 2026 00:34:06 +0000 (0:00:00.629) 0:07:45.208 **********
2026-03-02 00:34:29.369850 | orchestrator | included: osism.services.smartd for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:34:29.369855 | orchestrator |
2026-03-02 00:34:29.369859 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-03-02 00:34:29.369863 | orchestrator | Monday 02 March 2026 00:34:07 +0000 (0:00:00.774) 0:07:45.983 **********
2026-03-02 00:34:29.369868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:34:29.369873 | orchestrator |
2026-03-02 00:34:29.369877 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-03-02 00:34:29.369881 | orchestrator | Monday 02 March 2026 00:34:08 +0000 (0:00:00.757) 0:07:46.740 **********
2026-03-02 00:34:29.369885 | orchestrator | changed: [testbed-manager]
2026-03-02 00:34:29.369889 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:34:29.369892 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:34:29.369896 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:34:29.369900 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:34:29.369908 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:34:29.369912 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:34:29.369916 | orchestrator |
2026-03-02 00:34:29.369931 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-03-02 00:34:29.369935 | orchestrator | Monday 02 March 2026 00:34:17 +0000 (0:00:09.345) 0:07:56.086 **********
2026-03-02 00:34:29.369939 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:34:29.369943 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:34:29.369946 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:34:29.369950 | orchestrator | changed: [testbed-manager]
2026-03-02 00:34:29.369955 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:34:29.369959 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:34:29.369964 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:34:29.369968 | orchestrator |
2026-03-02 00:34:29.369973 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-03-02 00:34:29.369978 | orchestrator | Monday 02 March 2026 00:34:18 +0000 (0:00:00.879) 0:07:56.965 **********
2026-03-02 00:34:29.369983 | orchestrator | changed: [testbed-manager]
2026-03-02 00:34:29.369987 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:34:29.369992 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:34:29.369996 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:34:29.370002 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:34:29.370007 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:34:29.370014 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:34:29.370059 | orchestrator |
2026-03-02 00:34:29.370064 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-03-02 00:34:29.370069 | orchestrator | Monday 02 March 2026 00:34:19 +0000 (0:00:01.363) 0:07:58.329 **********
2026-03-02 00:34:29.370073 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:34:29.370078 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:34:29.370082 | orchestrator | changed: [testbed-manager]
2026-03-02 00:34:29.370086 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:34:29.370091 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:34:29.370095 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:34:29.370099 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:34:29.370104 | orchestrator |
2026-03-02 00:34:29.370108 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-03-02 00:34:29.370113 | orchestrator | Monday 02 March 2026 00:34:21 +0000 (0:00:01.854) 0:08:00.184 **********
2026-03-02 00:34:29.370118 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:34:29.370122 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:34:29.370126 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:34:29.370131 | orchestrator | changed: [testbed-manager]
2026-03-02 00:34:29.370135 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:34:29.370140 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:34:29.370144 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:34:29.370149 | orchestrator |
2026-03-02 00:34:29.370153 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-03-02 00:34:29.370158 | orchestrator | Monday 02 March 2026 00:34:22 +0000 (0:00:01.241) 0:08:01.426 **********
2026-03-02 00:34:29.370162 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:34:29.370167 | orchestrator | changed: [testbed-manager]
2026-03-02 00:34:29.370171 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:34:29.370175 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:34:29.370180 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:34:29.370184 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:34:29.370189 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:34:29.370193 | orchestrator |
2026-03-02 00:34:29.370198 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-03-02 00:34:29.370202 | orchestrator |
2026-03-02 00:34:29.370207 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-03-02 00:34:29.370211 | orchestrator | Monday 02 March 2026 00:34:24 +0000 (0:00:01.751) 0:08:03.177 **********
2026-03-02 00:34:29.370218 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:34:29.370223 | orchestrator |
2026-03-02 00:34:29.370227 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-02 00:34:29.370230 | orchestrator | Monday 02 March 2026 00:34:25 +0000 (0:00:00.907) 0:08:04.085 **********
2026-03-02 00:34:29.370234 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:34:29.370238 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:34:29.370245 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:34:29.370249 | orchestrator | ok: [testbed-manager]
2026-03-02 00:34:29.370253 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:34:29.370257 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:34:29.370261 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:34:29.370264 | orchestrator |
2026-03-02 00:34:29.370268 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-02 00:34:29.370272 | orchestrator | Monday 02 March 2026 00:34:26 +0000 (0:00:00.876) 0:08:04.962 **********
2026-03-02 00:34:29.370276 | orchestrator | changed: [testbed-manager]
2026-03-02 00:34:29.370280 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:34:29.370284 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:34:29.370287 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:34:29.370291 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:34:29.370295 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:34:29.370299 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:34:29.370303 | orchestrator |
2026-03-02 00:34:29.370306 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-03-02 00:34:29.370310 | orchestrator | Monday 02 March 2026 00:34:27 +0000 (0:00:01.254) 0:08:06.217 **********
2026-03-02 00:34:29.370314 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:34:29.370318 | orchestrator |
2026-03-02 00:34:29.370322 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-02 00:34:29.370326 | orchestrator | Monday 02 March 2026 00:34:28 +0000 (0:00:00.967) 0:08:07.184 **********
2026-03-02 00:34:29.370330 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:34:29.370334 | orchestrator | ok: [testbed-manager]
2026-03-02 00:34:29.370337 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:34:29.370341 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:34:29.370345 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:34:29.370349 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:34:29.370353 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:34:29.370356 | orchestrator |
2026-03-02 00:34:29.370364 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-02 00:34:30.823533 | orchestrator | Monday 02 March 2026 00:34:29 +0000 (0:00:00.855) 0:08:08.040 **********
2026-03-02 00:34:30.823630 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:34:30.823644 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:34:30.823652 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:34:30.823661 | orchestrator | changed: [testbed-manager]
2026-03-02 00:34:30.823669 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:34:30.823684 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:34:30.823691 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:34:30.823698 | orchestrator |
2026-03-02 00:34:30.823708 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:34:30.823717 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-03-02 00:34:30.823750 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-02 00:34:30.823759 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-02 00:34:30.823794 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-02 00:34:30.823803 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-03-02 00:34:30.823811 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-02 00:34:30.823819 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-02 00:34:30.823828 | orchestrator |
2026-03-02 00:34:30.823869 | orchestrator |
2026-03-02 00:34:30.823878 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 00:34:30.823886 | orchestrator | Monday 02 March 2026 00:34:30 +0000 (0:00:01.098) 0:08:09.138 **********
2026-03-02 00:34:30.823894 | orchestrator | ===============================================================================
2026-03-02 00:34:30.823903 | orchestrator | osism.commons.packages : Install required packages --------------------- 85.57s
2026-03-02 00:34:30.823911 | orchestrator | osism.commons.packages : Download required packages -------------------- 33.70s
2026-03-02 00:34:30.823918 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.13s
2026-03-02 00:34:30.823926 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.79s
2026-03-02 00:34:30.823956 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.49s
2026-03-02 00:34:30.823965 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.83s
2026-03-02 00:34:30.823974 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.43s
2026-03-02 00:34:30.823982 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.09s
2026-03-02 00:34:30.823990 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.90s
2026-03-02 00:34:30.823998 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.44s
2026-03-02 00:34:30.824004 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.35s
2026-03-02 00:34:30.824020 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.71s
2026-03-02 00:34:30.824024 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.37s
2026-03-02 00:34:30.824031 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.32s
2026-03-02 00:34:30.824039 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.15s
2026-03-02 00:34:30.824047 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.02s
2026-03-02 00:34:30.824055 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.13s
2026-03-02 00:34:30.824063 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.24s
2026-03-02 00:34:30.824070 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.89s
2026-03-02 00:34:30.824078 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.43s
2026-03-02 00:34:31.123800 | orchestrator | + osism apply fail2ban
2026-03-02 00:34:43.676026 | orchestrator | 2026-03-02 00:34:43 | INFO  | Prepare task for execution of fail2ban.
2026-03-02 00:34:43.744638 | orchestrator | 2026-03-02 00:34:43 | INFO  | Task fb98e5c3-acfa-4300-b02d-e1f7a1c6f977 (fail2ban) was prepared for execution.
2026-03-02 00:34:43.744821 | orchestrator | 2026-03-02 00:34:43 | INFO  | It takes a moment until task fb98e5c3-acfa-4300-b02d-e1f7a1c6f977 (fail2ban) has been started and output is visible here.
2026-03-02 00:35:05.483586 | orchestrator |
2026-03-02 00:35:05.483758 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-03-02 00:35:05.483831 | orchestrator |
2026-03-02 00:35:05.483847 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-03-02 00:35:05.483859 | orchestrator | Monday 02 March 2026 00:34:47 +0000 (0:00:00.244) 0:00:00.244 **********
2026-03-02 00:35:05.483872 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-02 00:35:05.483886 | orchestrator |
2026-03-02 00:35:05.483897 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-03-02 00:35:05.483908 | orchestrator | Monday 02 March 2026 00:34:48 +0000 (0:00:00.999) 0:00:01.244 **********
2026-03-02 00:35:05.483919 | orchestrator | changed: [testbed-manager]
2026-03-02 00:35:05.483931 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:35:05.483942 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:35:05.483953 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:35:05.483964 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:35:05.483975 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:35:05.483986 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:35:05.483997 | orchestrator |
2026-03-02 00:35:05.484008 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-03-02 00:35:05.484019 | orchestrator | Monday 02 March 2026 00:35:00 +0000 (0:00:12.018) 0:00:13.263 **********
2026-03-02 00:35:05.484030 | orchestrator | changed: [testbed-manager]
2026-03-02 00:35:05.484041 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:35:05.484052 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:35:05.484063 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:35:05.484074 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:35:05.484085 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:35:05.484096 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:35:05.484107 | orchestrator |
2026-03-02 00:35:05.484118 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-03-02 00:35:05.484131 | orchestrator | Monday 02 March 2026 00:35:02 +0000 (0:00:01.395) 0:00:14.686 **********
2026-03-02 00:35:05.484144 | orchestrator | ok: [testbed-manager]
2026-03-02 00:35:05.484159 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:35:05.484172 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:35:05.484185 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:35:05.484197 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:35:05.484210 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:35:05.484223 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:35:05.484236 | orchestrator |
2026-03-02 00:35:05.484249 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-03-02 00:35:05.484263 | orchestrator | Monday 02 March 2026 00:35:03 +0000 (0:00:01.395) 0:00:16.082 **********
2026-03-02 00:35:05.484276 | orchestrator | changed: [testbed-manager]
2026-03-02 00:35:05.484290 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:35:05.484303 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:35:05.484316 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:35:05.484329 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:35:05.484342 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:35:05.484356 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:35:05.484369 | orchestrator |
2026-03-02 00:35:05.484383 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:35:05.484397 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:35:05.484410 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:35:05.484421 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:35:05.484433 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:35:05.484468 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:35:05.484480 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:35:05.484491 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:35:05.484502 | orchestrator |
2026-03-02 00:35:05.484513 | orchestrator |
2026-03-02 00:35:05.484524 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 00:35:05.484535 | orchestrator | Monday 02 March 2026 00:35:05 +0000 (0:00:01.577) 0:00:17.660 **********
2026-03-02 00:35:05.484546 | orchestrator | ===============================================================================
2026-03-02 00:35:05.484557 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 12.02s
2026-03-02 00:35:05.484568 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.58s
2026-03-02 00:35:05.484579 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.42s
2026-03-02 00:35:05.484590 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.40s
2026-03-02 00:35:05.484601 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.00s
2026-03-02 00:35:05.788614 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-03-02 00:35:05.788803 | orchestrator | + osism apply network
2026-03-02 00:35:17.908593 | orchestrator | 2026-03-02 00:35:17 | INFO  | Prepare task for execution of network.
2026-03-02 00:35:17.976970 | orchestrator | 2026-03-02 00:35:17 | INFO  | Task 68172a1f-8bcd-47d8-b5b8-8b5bb954e108 (network) was prepared for execution.
2026-03-02 00:35:17.977053 | orchestrator | 2026-03-02 00:35:17 | INFO  | It takes a moment until task 68172a1f-8bcd-47d8-b5b8-8b5bb954e108 (network) has been started and output is visible here.
2026-03-02 00:35:46.719444 | orchestrator | 2026-03-02 00:35:46.719548 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-03-02 00:35:46.719563 | orchestrator | 2026-03-02 00:35:46.719574 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-03-02 00:35:46.719584 | orchestrator | Monday 02 March 2026 00:35:21 +0000 (0:00:00.226) 0:00:00.226 ********** 2026-03-02 00:35:46.719594 | orchestrator | ok: [testbed-manager] 2026-03-02 00:35:46.719604 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:35:46.719651 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:35:46.719666 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:35:46.719683 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:35:46.719698 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:35:46.719709 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:35:46.719718 | orchestrator | 2026-03-02 00:35:46.719727 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-03-02 00:35:46.719736 | orchestrator | Monday 02 March 2026 00:35:22 +0000 (0:00:00.574) 0:00:00.801 ********** 2026-03-02 00:35:46.719746 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:35:46.719757 | orchestrator | 2026-03-02 00:35:46.719766 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-03-02 00:35:46.719775 | orchestrator | Monday 02 March 2026 00:35:23 +0000 (0:00:01.118) 0:00:01.919 ********** 2026-03-02 00:35:46.719786 | orchestrator | ok: [testbed-manager] 2026-03-02 00:35:46.719800 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:35:46.719814 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:35:46.719827 | 
orchestrator | ok: [testbed-node-3] 2026-03-02 00:35:46.719841 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:35:46.719885 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:35:46.719899 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:35:46.719913 | orchestrator | 2026-03-02 00:35:46.719927 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-03-02 00:35:46.719942 | orchestrator | Monday 02 March 2026 00:35:25 +0000 (0:00:02.090) 0:00:04.010 ********** 2026-03-02 00:35:46.719957 | orchestrator | ok: [testbed-manager] 2026-03-02 00:35:46.719973 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:35:46.719984 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:35:46.719994 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:35:46.720004 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:35:46.720014 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:35:46.720024 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:35:46.720034 | orchestrator | 2026-03-02 00:35:46.720044 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-03-02 00:35:46.720055 | orchestrator | Monday 02 March 2026 00:35:27 +0000 (0:00:01.767) 0:00:05.778 ********** 2026-03-02 00:35:46.720065 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-03-02 00:35:46.720077 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-03-02 00:35:46.720087 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-03-02 00:35:46.720098 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-03-02 00:35:46.720108 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-03-02 00:35:46.720118 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-03-02 00:35:46.720128 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-03-02 00:35:46.720138 | orchestrator | 2026-03-02 00:35:46.720148 | orchestrator | TASK [osism.commons.network : 
Prepare netplan configuration template] ********** 2026-03-02 00:35:46.720160 | orchestrator | Monday 02 March 2026 00:35:28 +0000 (0:00:00.885) 0:00:06.663 ********** 2026-03-02 00:35:46.720196 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-02 00:35:46.720212 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-02 00:35:46.720225 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-02 00:35:46.720241 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-02 00:35:46.720255 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-02 00:35:46.720272 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-02 00:35:46.720288 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-02 00:35:46.720302 | orchestrator | 2026-03-02 00:35:46.720313 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-03-02 00:35:46.720323 | orchestrator | Monday 02 March 2026 00:35:31 +0000 (0:00:03.077) 0:00:09.741 ********** 2026-03-02 00:35:46.720335 | orchestrator | changed: [testbed-manager] 2026-03-02 00:35:46.720345 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:35:46.720354 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:35:46.720363 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:35:46.720372 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:35:46.720380 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:35:46.720389 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:35:46.720397 | orchestrator | 2026-03-02 00:35:46.720406 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-03-02 00:35:46.720415 | orchestrator | Monday 02 March 2026 00:35:33 +0000 (0:00:01.603) 0:00:11.345 ********** 2026-03-02 00:35:46.720424 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-02 00:35:46.720432 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-02 00:35:46.720441 | orchestrator | ok: [testbed-node-2 
-> localhost] 2026-03-02 00:35:46.720467 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-02 00:35:46.720476 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-02 00:35:46.720484 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-02 00:35:46.720493 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-02 00:35:46.720502 | orchestrator | 2026-03-02 00:35:46.720511 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-03-02 00:35:46.720520 | orchestrator | Monday 02 March 2026 00:35:34 +0000 (0:00:01.675) 0:00:13.020 ********** 2026-03-02 00:35:46.720540 | orchestrator | ok: [testbed-manager] 2026-03-02 00:35:46.720556 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:35:46.720570 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:35:46.720584 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:35:46.720598 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:35:46.720657 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:35:46.720672 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:35:46.720687 | orchestrator | 2026-03-02 00:35:46.720703 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-03-02 00:35:46.720739 | orchestrator | Monday 02 March 2026 00:35:35 +0000 (0:00:01.066) 0:00:14.087 ********** 2026-03-02 00:35:46.720749 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:35:46.720758 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:35:46.720767 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:35:46.720776 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:35:46.720784 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:35:46.720793 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:35:46.720802 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:35:46.720810 | orchestrator | 2026-03-02 00:35:46.720819 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2026-03-02 00:35:46.720828 | orchestrator | Monday 02 March 2026 00:35:36 +0000 (0:00:00.654) 0:00:14.741 ********** 2026-03-02 00:35:46.720837 | orchestrator | ok: [testbed-manager] 2026-03-02 00:35:46.720846 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:35:46.720854 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:35:46.720863 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:35:46.720871 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:35:46.720892 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:35:46.720901 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:35:46.720909 | orchestrator | 2026-03-02 00:35:46.720918 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-03-02 00:35:46.720927 | orchestrator | Monday 02 March 2026 00:35:38 +0000 (0:00:02.350) 0:00:17.092 ********** 2026-03-02 00:35:46.720935 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:35:46.720944 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:35:46.720953 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:35:46.720961 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:35:46.720970 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:35:46.720979 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:35:46.720988 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-03-02 00:35:46.720998 | orchestrator | 2026-03-02 00:35:46.721007 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-03-02 00:35:46.721017 | orchestrator | Monday 02 March 2026 00:35:39 +0000 (0:00:00.911) 0:00:18.003 ********** 2026-03-02 00:35:46.721026 | orchestrator | ok: [testbed-manager] 2026-03-02 00:35:46.721035 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:35:46.721043 | orchestrator | changed: [testbed-node-1] 2026-03-02 
00:35:46.721052 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:35:46.721060 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:35:46.721069 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:35:46.721078 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:35:46.721086 | orchestrator | 2026-03-02 00:35:46.721095 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-03-02 00:35:46.721104 | orchestrator | Monday 02 March 2026 00:35:41 +0000 (0:00:01.734) 0:00:19.738 ********** 2026-03-02 00:35:46.721113 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:35:46.721124 | orchestrator | 2026-03-02 00:35:46.721133 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-02 00:35:46.721142 | orchestrator | Monday 02 March 2026 00:35:42 +0000 (0:00:01.333) 0:00:21.071 ********** 2026-03-02 00:35:46.721158 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:35:46.721167 | orchestrator | ok: [testbed-manager] 2026-03-02 00:35:46.721178 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:35:46.721193 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:35:46.721207 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:35:46.721221 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:35:46.721236 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:35:46.721250 | orchestrator | 2026-03-02 00:35:46.721265 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-03-02 00:35:46.721280 | orchestrator | Monday 02 March 2026 00:35:44 +0000 (0:00:02.079) 0:00:23.150 ********** 2026-03-02 00:35:46.721296 | orchestrator | ok: [testbed-manager] 2026-03-02 00:35:46.721311 | orchestrator | ok: [testbed-node-0] 2026-03-02 
00:35:46.721330 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:35:46.721339 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:35:46.721348 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:35:46.721357 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:35:46.721365 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:35:46.721374 | orchestrator | 2026-03-02 00:35:46.721383 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-02 00:35:46.721391 | orchestrator | Monday 02 March 2026 00:35:45 +0000 (0:00:00.652) 0:00:23.803 ********** 2026-03-02 00:35:46.721400 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-03-02 00:35:46.721409 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-03-02 00:35:46.721421 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-03-02 00:35:46.721434 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-03-02 00:35:46.721448 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-02 00:35:46.721463 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-03-02 00:35:46.721477 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-02 00:35:46.721491 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-03-02 00:35:46.721504 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-02 00:35:46.721519 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-02 00:35:46.721528 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-02 00:35:46.721537 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-03-02 00:35:46.721545 | orchestrator | changed: [testbed-node-4] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-03-02 00:35:46.721555 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-02 00:35:46.721563 | orchestrator | 2026-03-02 00:35:46.721582 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-03-02 00:36:02.194509 | orchestrator | Monday 02 March 2026 00:35:46 +0000 (0:00:01.227) 0:00:25.030 ********** 2026-03-02 00:36:02.194685 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:36:02.194710 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:36:02.194725 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:36:02.194738 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:36:02.194750 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:36:02.194761 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:36:02.194773 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:36:02.194785 | orchestrator | 2026-03-02 00:36:02.194798 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-03-02 00:36:02.194810 | orchestrator | Monday 02 March 2026 00:35:47 +0000 (0:00:00.610) 0:00:25.641 ********** 2026-03-02 00:36:02.194823 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-node-1, testbed-node-3, testbed-manager, testbed-node-5, testbed-node-2, testbed-node-4 2026-03-02 00:36:02.194865 | orchestrator | 2026-03-02 00:36:02.194878 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-03-02 00:36:02.194889 | orchestrator | Monday 02 March 2026 00:35:51 +0000 (0:00:04.261) 0:00:29.903 ********** 2026-03-02 00:36:02.194903 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-02 00:36:02.194918 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-02 00:36:02.194931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-02 00:36:02.194944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-02 00:36:02.194956 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-02 00:36:02.194968 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-02 00:36:02.194997 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-02 00:36:02.195012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-02 00:36:02.195025 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-02 00:36:02.195044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-02 00:36:02.195057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-02 00:36:02.195087 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-02 00:36:02.195099 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-02 00:36:02.195123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': 
'192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-02 00:36:02.195135 | orchestrator | 2026-03-02 00:36:02.195147 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-03-02 00:36:02.195160 | orchestrator | Monday 02 March 2026 00:35:57 +0000 (0:00:05.531) 0:00:35.434 ********** 2026-03-02 00:36:02.195172 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-02 00:36:02.195183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-02 00:36:02.195195 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-02 00:36:02.195207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-02 00:36:02.195220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-02 00:36:02.195233 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-02 00:36:02.195246 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-02 00:36:02.195266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-02 00:36:02.195280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-02 00:36:02.195293 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-02 00:36:02.195306 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-02 00:36:02.195319 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-02 00:36:02.195352 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-02 00:36:15.402232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-02 00:36:15.402367 | orchestrator | 2026-03-02 00:36:15.402388 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-03-02 00:36:15.402402 | orchestrator | Monday 02 March 2026 00:36:02 +0000 (0:00:05.460) 0:00:40.894 ********** 2026-03-02 00:36:15.402414 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:36:15.402427 | orchestrator | 2026-03-02 00:36:15.402439 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-02 00:36:15.402450 | orchestrator | Monday 02 March 2026 00:36:03 +0000 (0:00:01.137) 0:00:42.032 ********** 2026-03-02 00:36:15.402462 | orchestrator | ok: [testbed-manager] 2026-03-02 00:36:15.402475 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:36:15.402486 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:36:15.402498 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:36:15.402509 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:36:15.402520 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:36:15.402531 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:36:15.402542 | orchestrator | 2026-03-02 00:36:15.402553 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2026-03-02 00:36:15.402596 | orchestrator | Monday 02 March 2026 00:36:04 +0000 (0:00:01.052) 0:00:43.084 ********** 2026-03-02 00:36:15.402610 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-02 00:36:15.402622 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-02 00:36:15.402634 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-02 00:36:15.402645 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-02 00:36:15.402656 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-02 00:36:15.402668 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-02 00:36:15.402679 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-02 00:36:15.402690 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:36:15.402703 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-02 00:36:15.402714 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-02 00:36:15.402725 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-02 00:36:15.402736 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-02 00:36:15.402747 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-02 00:36:15.402760 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:36:15.402773 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-02 00:36:15.402803 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  
2026-03-02 00:36:15.402816 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-02 00:36:15.402829 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-02 00:36:15.402864 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:36:15.402877 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-02 00:36:15.402890 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-02 00:36:15.402903 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-02 00:36:15.402916 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-02 00:36:15.402929 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:36:15.402942 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-02 00:36:15.402955 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-02 00:36:15.402968 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-02 00:36:15.402980 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-02 00:36:15.402992 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:36:15.403005 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:36:15.403018 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-02 00:36:15.403031 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-02 00:36:15.403044 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-02 00:36:15.403056 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-02 00:36:15.403069 | 
orchestrator | skipping: [testbed-node-5] 2026-03-02 00:36:15.403081 | orchestrator | 2026-03-02 00:36:15.403094 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-03-02 00:36:15.403126 | orchestrator | Monday 02 March 2026 00:36:05 +0000 (0:00:00.771) 0:00:43.856 ********** 2026-03-02 00:36:15.403138 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:36:15.403151 | orchestrator | 2026-03-02 00:36:15.403170 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-03-02 00:36:15.403188 | orchestrator | Monday 02 March 2026 00:36:06 +0000 (0:00:01.218) 0:00:45.075 ********** 2026-03-02 00:36:15.403206 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:36:15.403226 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:36:15.403246 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:36:15.403264 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:36:15.403280 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:36:15.403291 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:36:15.403302 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:36:15.403312 | orchestrator | 2026-03-02 00:36:15.403324 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-03-02 00:36:15.403335 | orchestrator | Monday 02 March 2026 00:36:07 +0000 (0:00:00.617) 0:00:45.693 ********** 2026-03-02 00:36:15.403346 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:36:15.403357 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:36:15.403368 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:36:15.403379 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:36:15.403390 | 
orchestrator | skipping: [testbed-node-3] 2026-03-02 00:36:15.403401 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:36:15.403412 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:36:15.403423 | orchestrator | 2026-03-02 00:36:15.403434 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-03-02 00:36:15.403445 | orchestrator | Monday 02 March 2026 00:36:08 +0000 (0:00:00.819) 0:00:46.512 ********** 2026-03-02 00:36:15.403456 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:36:15.403476 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:36:15.403487 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:36:15.403498 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:36:15.403509 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:36:15.403520 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:36:15.403531 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:36:15.403542 | orchestrator | 2026-03-02 00:36:15.403553 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-03-02 00:36:15.403564 | orchestrator | Monday 02 March 2026 00:36:08 +0000 (0:00:00.621) 0:00:47.134 ********** 2026-03-02 00:36:15.403614 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:36:15.403634 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:36:15.403652 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:36:15.403670 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:36:15.403685 | orchestrator | ok: [testbed-manager] 2026-03-02 00:36:15.403696 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:36:15.403707 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:36:15.403718 | orchestrator | 2026-03-02 00:36:15.403729 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-03-02 00:36:15.403740 | orchestrator | Monday 02 March 2026 00:36:10 +0000 (0:00:01.856) 0:00:48.990 ********** 
2026-03-02 00:36:15.403751 | orchestrator | ok: [testbed-manager]
2026-03-02 00:36:15.403762 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:36:15.403773 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:36:15.403783 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:36:15.403794 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:36:15.403805 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:36:15.403816 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:36:15.403827 | orchestrator |
2026-03-02 00:36:15.403838 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-03-02 00:36:15.403856 | orchestrator | Monday 02 March 2026 00:36:11 +0000 (0:00:01.023) 0:00:50.014 **********
2026-03-02 00:36:15.403868 | orchestrator | ok: [testbed-manager]
2026-03-02 00:36:15.403878 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:36:15.403889 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:36:15.403900 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:36:15.403911 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:36:15.403922 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:36:15.403933 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:36:15.403943 | orchestrator |
2026-03-02 00:36:15.403955 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-02 00:36:15.403966 | orchestrator | Monday 02 March 2026 00:36:13 +0000 (0:00:02.298) 0:00:52.313 **********
2026-03-02 00:36:15.403977 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:36:15.403988 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:36:15.403999 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:36:15.404010 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:36:15.404021 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:36:15.404032 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:36:15.404042 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:36:15.404053 | orchestrator |
2026-03-02 00:36:15.404064 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-02 00:36:15.404080 | orchestrator | Monday 02 March 2026 00:36:14 +0000 (0:00:00.843) 0:00:53.156 **********
2026-03-02 00:36:15.404107 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:36:15.404129 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:36:15.404147 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:36:15.404165 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:36:15.404182 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:36:15.404198 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:36:15.404215 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:36:15.404232 | orchestrator |
2026-03-02 00:36:15.404251 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:36:15.404270 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-02 00:36:15.404301 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-02 00:36:15.404333 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-02 00:36:15.861441 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-02 00:36:15.861545 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-02 00:36:15.861560 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-02 00:36:15.861623 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-02 00:36:15.861636 | orchestrator |
2026-03-02 00:36:15.861649 | orchestrator |
2026-03-02 00:36:15.861661 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 00:36:15.861673 | orchestrator | Monday 02 March 2026 00:36:15 +0000 (0:00:00.556) 0:00:53.713 **********
2026-03-02 00:36:15.861684 | orchestrator | ===============================================================================
2026-03-02 00:36:15.861695 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.53s
2026-03-02 00:36:15.861706 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.46s
2026-03-02 00:36:15.861717 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.26s
2026-03-02 00:36:15.861728 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.08s
2026-03-02 00:36:15.861739 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.35s
2026-03-02 00:36:15.861750 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.30s
2026-03-02 00:36:15.861761 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.09s
2026-03-02 00:36:15.861772 | orchestrator | osism.commons.network : List existing configuration files --------------- 2.08s
2026-03-02 00:36:15.861782 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.86s
2026-03-02 00:36:15.861793 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.77s
2026-03-02 00:36:15.861804 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.73s
2026-03-02 00:36:15.861815 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.68s
2026-03-02 00:36:15.861826 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.60s
2026-03-02 00:36:15.861836 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.33s
2026-03-02 00:36:15.861847 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.23s
2026-03-02 00:36:15.861858 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.22s
2026-03-02 00:36:15.861869 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.14s
2026-03-02 00:36:15.861880 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.12s
2026-03-02 00:36:15.861891 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.07s
2026-03-02 00:36:15.861902 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.05s
2026-03-02 00:36:16.177670 | orchestrator | + osism apply wireguard
2026-03-02 00:36:28.328713 | orchestrator | 2026-03-02 00:36:28 | INFO  | Prepare task for execution of wireguard.
2026-03-02 00:36:28.400759 | orchestrator | 2026-03-02 00:36:28 | INFO  | Task e8cc444d-3766-4f6d-8471-9c4b9b5725bc (wireguard) was prepared for execution.
2026-03-02 00:36:28.400895 | orchestrator | 2026-03-02 00:36:28 | INFO  | It takes a moment until task e8cc444d-3766-4f6d-8471-9c4b9b5725bc (wireguard) has been started and output is visible here.
2026-03-02 00:36:46.361376 | orchestrator |
2026-03-02 00:36:46.361474 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-02 00:36:46.361487 | orchestrator |
2026-03-02 00:36:46.361497 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-02 00:36:46.361505 | orchestrator | Monday 02 March 2026 00:36:32 +0000 (0:00:00.202) 0:00:00.202 **********
2026-03-02 00:36:46.361513 | orchestrator | ok: [testbed-manager]
2026-03-02 00:36:46.361521 | orchestrator |
2026-03-02 00:36:46.361529 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-02 00:36:46.361537 | orchestrator | Monday 02 March 2026 00:36:33 +0000 (0:00:01.283) 0:00:01.485 **********
2026-03-02 00:36:46.361545 | orchestrator | changed: [testbed-manager]
2026-03-02 00:36:46.361554 | orchestrator |
2026-03-02 00:36:46.361561 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-02 00:36:46.361569 | orchestrator | Monday 02 March 2026 00:36:39 +0000 (0:00:05.170) 0:00:06.656 **********
2026-03-02 00:36:46.361577 | orchestrator | changed: [testbed-manager]
2026-03-02 00:36:46.361584 | orchestrator |
2026-03-02 00:36:46.361592 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-02 00:36:46.361599 | orchestrator | Monday 02 March 2026 00:36:39 +0000 (0:00:00.537) 0:00:07.193 **********
2026-03-02 00:36:46.361606 | orchestrator | changed: [testbed-manager]
2026-03-02 00:36:46.361614 | orchestrator |
2026-03-02 00:36:46.361621 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-02 00:36:46.361629 | orchestrator | Monday 02 March 2026 00:36:40 +0000 (0:00:00.425) 0:00:07.619 **********
2026-03-02 00:36:46.361636 | orchestrator | ok: [testbed-manager]
2026-03-02 00:36:46.361689 | orchestrator |
2026-03-02 00:36:46.361699 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-02 00:36:46.361706 | orchestrator | Monday 02 March 2026 00:36:40 +0000 (0:00:00.666) 0:00:08.286 **********
2026-03-02 00:36:46.361714 | orchestrator | ok: [testbed-manager]
2026-03-02 00:36:46.361721 | orchestrator |
2026-03-02 00:36:46.361728 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-02 00:36:46.361736 | orchestrator | Monday 02 March 2026 00:36:41 +0000 (0:00:00.405) 0:00:08.691 **********
2026-03-02 00:36:46.361743 | orchestrator | ok: [testbed-manager]
2026-03-02 00:36:46.361750 | orchestrator |
2026-03-02 00:36:46.361759 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-02 00:36:46.361771 | orchestrator | Monday 02 March 2026 00:36:41 +0000 (0:00:00.385) 0:00:09.077 **********
2026-03-02 00:36:46.361783 | orchestrator | changed: [testbed-manager]
2026-03-02 00:36:46.361795 | orchestrator |
2026-03-02 00:36:46.361807 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-02 00:36:46.361820 | orchestrator | Monday 02 March 2026 00:36:42 +0000 (0:00:01.055) 0:00:10.132 **********
2026-03-02 00:36:46.361834 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-02 00:36:46.361847 | orchestrator | changed: [testbed-manager]
2026-03-02 00:36:46.361859 | orchestrator |
2026-03-02 00:36:46.361871 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-02 00:36:46.361885 | orchestrator | Monday 02 March 2026 00:36:43 +0000 (0:00:00.852) 0:00:10.985 **********
2026-03-02 00:36:46.361895 | orchestrator | changed: [testbed-manager]
2026-03-02 00:36:46.361902 | orchestrator |
2026-03-02 00:36:46.361910 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-02 00:36:46.361917 | orchestrator | Monday 02 March 2026 00:36:45 +0000 (0:00:01.644) 0:00:12.629 **********
2026-03-02 00:36:46.361925 | orchestrator | changed: [testbed-manager]
2026-03-02 00:36:46.361932 | orchestrator |
2026-03-02 00:36:46.361942 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:36:46.361994 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:36:46.362004 | orchestrator |
2026-03-02 00:36:46.362013 | orchestrator |
2026-03-02 00:36:46.362071 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 00:36:46.362081 | orchestrator | Monday 02 March 2026 00:36:45 +0000 (0:00:00.925) 0:00:13.555 **********
2026-03-02 00:36:46.362092 | orchestrator | ===============================================================================
2026-03-02 00:36:46.362104 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.17s
2026-03-02 00:36:46.362122 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.64s
2026-03-02 00:36:46.362136 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.28s
2026-03-02 00:36:46.362148 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.06s
2026-03-02 00:36:46.362160 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.93s
2026-03-02 00:36:46.362172 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.85s
2026-03-02 00:36:46.362185 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.67s
2026-03-02 00:36:46.362198 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s
2026-03-02 00:36:46.362212 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s
2026-03-02 00:36:46.362226 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.41s
2026-03-02 00:36:46.362234 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.39s
2026-03-02 00:36:46.656832 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-02 00:36:46.695260 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-02 00:36:46.695351 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-02 00:36:46.776817 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 175 0 --:--:-- --:--:-- --:--:-- 177
2026-03-02 00:36:46.789149 | orchestrator | + osism apply --environment custom workarounds
2026-03-02 00:36:48.592746 | orchestrator | 2026-03-02 00:36:48 | INFO  | Trying to run play workarounds in environment custom
2026-03-02 00:36:58.636641 | orchestrator | 2026-03-02 00:36:58 | INFO  | Prepare task for execution of workarounds.
2026-03-02 00:36:58.709259 | orchestrator | 2026-03-02 00:36:58 | INFO  | Task 7b002397-d734-4b87-9862-37e4bbc8232e (workarounds) was prepared for execution.
2026-03-02 00:36:58.709337 | orchestrator | 2026-03-02 00:36:58 | INFO  | It takes a moment until task 7b002397-d734-4b87-9862-37e4bbc8232e (workarounds) has been started and output is visible here.
2026-03-02 00:37:23.036978 | orchestrator |
2026-03-02 00:37:23.037087 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-02 00:37:23.037103 | orchestrator |
2026-03-02 00:37:23.037113 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-02 00:37:23.037122 | orchestrator | Monday 02 March 2026 00:37:02 +0000 (0:00:00.111) 0:00:00.111 **********
2026-03-02 00:37:23.037133 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-02 00:37:23.037142 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-02 00:37:23.037151 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-02 00:37:23.037160 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-02 00:37:23.037169 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-02 00:37:23.037177 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-02 00:37:23.037187 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-02 00:37:23.037217 | orchestrator |
2026-03-02 00:37:23.037226 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-02 00:37:23.037235 | orchestrator |
2026-03-02 00:37:23.037244 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-02 00:37:23.037253 | orchestrator | Monday 02 March 2026 00:37:03 +0000 (0:00:00.601) 0:00:00.713 **********
2026-03-02 00:37:23.037272 | orchestrator | ok: [testbed-manager]
2026-03-02 00:37:23.037282 | orchestrator |
2026-03-02 00:37:23.037291 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-02 00:37:23.037299 | orchestrator |
2026-03-02 00:37:23.037308 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-02 00:37:23.037317 | orchestrator | Monday 02 March 2026 00:37:05 +0000 (0:00:02.145) 0:00:02.859 **********
2026-03-02 00:37:23.037325 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:37:23.037334 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:37:23.037343 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:37:23.037351 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:37:23.037360 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:37:23.037368 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:37:23.037377 | orchestrator |
2026-03-02 00:37:23.037385 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-02 00:37:23.037394 | orchestrator |
2026-03-02 00:37:23.037402 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-02 00:37:23.037411 | orchestrator | Monday 02 March 2026 00:37:07 +0000 (0:00:01.796) 0:00:04.656 **********
2026-03-02 00:37:23.037420 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-02 00:37:23.037430 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-02 00:37:23.037439 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-02 00:37:23.037448 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-02 00:37:23.037456 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-02 00:37:23.037465 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-02 00:37:23.037473 | orchestrator |
2026-03-02 00:37:23.037482 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-02 00:37:23.037491 | orchestrator | Monday 02 March 2026 00:37:08 +0000 (0:00:01.426) 0:00:06.082 **********
2026-03-02 00:37:23.037500 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:37:23.037509 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:37:23.037524 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:37:23.037538 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:37:23.037553 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:37:23.037567 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:37:23.037581 | orchestrator |
2026-03-02 00:37:23.037595 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-02 00:37:23.037610 | orchestrator | Monday 02 March 2026 00:37:12 +0000 (0:00:03.382) 0:00:09.465 **********
2026-03-02 00:37:23.037625 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:37:23.037639 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:37:23.037664 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:37:23.037674 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:37:23.037685 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:37:23.037696 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:37:23.037706 | orchestrator |
2026-03-02 00:37:23.037717 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-02 00:37:23.037753 | orchestrator |
2026-03-02 00:37:23.037765 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-02 00:37:23.037775 | orchestrator | Monday 02 March 2026 00:37:12 +0000 (0:00:00.693) 0:00:10.158 **********
2026-03-02 00:37:23.037794 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:37:23.037805 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:37:23.037815 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:37:23.037826 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:37:23.037836 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:37:23.037846 | orchestrator | changed: [testbed-manager]
2026-03-02 00:37:23.037856 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:37:23.037867 | orchestrator |
2026-03-02 00:37:23.037877 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-02 00:37:23.037887 | orchestrator | Monday 02 March 2026 00:37:14 +0000 (0:00:01.566) 0:00:11.725 **********
2026-03-02 00:37:23.037898 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:37:23.037909 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:37:23.037919 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:37:23.037928 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:37:23.037936 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:37:23.037945 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:37:23.037971 | orchestrator | changed: [testbed-manager]
2026-03-02 00:37:23.037980 | orchestrator |
2026-03-02 00:37:23.037989 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-02 00:37:23.037998 | orchestrator | Monday 02 March 2026 00:37:15 +0000 (0:00:01.492) 0:00:13.258 **********
2026-03-02 00:37:23.038007 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:37:23.038054 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:37:23.038065 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:37:23.038073 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:37:23.038082 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:37:23.038090 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:37:23.038099 | orchestrator | ok: [testbed-manager]
2026-03-02 00:37:23.038108 | orchestrator |
2026-03-02 00:37:23.038117 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-02 00:37:23.038125 | orchestrator | Monday 02 March 2026 00:37:17 +0000 (0:00:01.492) 0:00:14.750 **********
2026-03-02 00:37:23.038134 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:37:23.038143 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:37:23.038151 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:37:23.038160 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:37:23.038169 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:37:23.038177 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:37:23.038186 | orchestrator | changed: [testbed-manager]
2026-03-02 00:37:23.038194 | orchestrator |
2026-03-02 00:37:23.038203 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-02 00:37:23.038211 | orchestrator | Monday 02 March 2026 00:37:19 +0000 (0:00:01.789) 0:00:16.540 **********
2026-03-02 00:37:23.038220 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:37:23.038228 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:37:23.038237 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:37:23.038245 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:37:23.038254 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:37:23.038262 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:37:23.038270 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:37:23.038279 | orchestrator |
2026-03-02 00:37:23.038288 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-02 00:37:23.038297 | orchestrator |
2026-03-02 00:37:23.038305 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-02 00:37:23.038317 | orchestrator | Monday 02 March 2026 00:37:19 +0000 (0:00:00.659) 0:00:17.200 **********
2026-03-02 00:37:23.038332 | orchestrator | ok: [testbed-manager]
2026-03-02 00:37:23.038346 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:37:23.038360 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:37:23.038375 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:37:23.038389 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:37:23.038405 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:37:23.038430 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:37:23.038445 | orchestrator |
2026-03-02 00:37:23.038460 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:37:23.038473 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-02 00:37:23.038483 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:37:23.038492 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:37:23.038502 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:37:23.038510 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:37:23.038519 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:37:23.038528 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:37:23.038537 | orchestrator |
2026-03-02 00:37:23.038546 | orchestrator |
2026-03-02 00:37:23.038561 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 00:37:23.038583 | orchestrator | Monday 02 March 2026 00:37:23 +0000 (0:00:03.165) 0:00:20.365 **********
2026-03-02 00:37:23.038598 | orchestrator | ===============================================================================
2026-03-02 00:37:23.038614 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.38s
2026-03-02 00:37:23.038629 | orchestrator | Install python3-docker -------------------------------------------------- 3.17s
2026-03-02 00:37:23.038638 | orchestrator | Apply netplan configuration --------------------------------------------- 2.15s
2026-03-02 00:37:23.038647 | orchestrator | Apply netplan configuration --------------------------------------------- 1.80s
2026-03-02 00:37:23.038656 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.79s
2026-03-02 00:37:23.038664 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.57s
2026-03-02 00:37:23.038673 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.53s
2026-03-02 00:37:23.038682 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.49s
2026-03-02 00:37:23.038691 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.43s
2026-03-02 00:37:23.038699 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.69s
2026-03-02 00:37:23.038708 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.66s
2026-03-02 00:37:23.038746 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.60s
2026-03-02 00:37:23.606560 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-02 00:37:35.640051 | orchestrator | 2026-03-02 00:37:35 | INFO  | Prepare task for execution of reboot.
2026-03-02 00:37:35.728859 | orchestrator | 2026-03-02 00:37:35 | INFO  | Task 6b7bdc5f-efef-485f-99da-9b37d6b393c1 (reboot) was prepared for execution.
2026-03-02 00:37:35.728954 | orchestrator | 2026-03-02 00:37:35 | INFO  | It takes a moment until task 6b7bdc5f-efef-485f-99da-9b37d6b393c1 (reboot) has been started and output is visible here.
2026-03-02 00:37:46.335066 | orchestrator |
2026-03-02 00:37:46.335182 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-02 00:37:46.335209 | orchestrator |
2026-03-02 00:37:46.335223 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-02 00:37:46.335256 | orchestrator | Monday 02 March 2026 00:37:40 +0000 (0:00:00.199) 0:00:00.199 **********
2026-03-02 00:37:46.335268 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:37:46.335280 | orchestrator |
2026-03-02 00:37:46.335291 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-02 00:37:46.335302 | orchestrator | Monday 02 March 2026 00:37:40 +0000 (0:00:00.109) 0:00:00.309 **********
2026-03-02 00:37:46.335313 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:37:46.335324 | orchestrator |
2026-03-02 00:37:46.335342 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-02 00:37:46.335364 | orchestrator | Monday 02 March 2026 00:37:41 +0000 (0:00:01.016) 0:00:01.325 **********
2026-03-02 00:37:46.335376 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:37:46.335392 | orchestrator |
2026-03-02 00:37:46.335410 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-02 00:37:46.335428 | orchestrator |
2026-03-02 00:37:46.335447 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-02 00:37:46.335465 | orchestrator | Monday 02 March 2026 00:37:41 +0000 (0:00:00.124) 0:00:01.450 **********
2026-03-02 00:37:46.335478 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:37:46.335491 | orchestrator |
2026-03-02 00:37:46.335514 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-02 00:37:46.335541 | orchestrator | Monday 02 March 2026 00:37:41 +0000 (0:00:00.102) 0:00:01.553 **********
2026-03-02 00:37:46.335560 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:37:46.335577 | orchestrator |
2026-03-02 00:37:46.335596 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-02 00:37:46.335613 | orchestrator | Monday 02 March 2026 00:37:42 +0000 (0:00:00.675) 0:00:02.228 **********
2026-03-02 00:37:46.335632 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:37:46.335649 | orchestrator |
2026-03-02 00:37:46.335667 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-02 00:37:46.335686 | orchestrator |
2026-03-02 00:37:46.335704 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-02 00:37:46.335724 | orchestrator | Monday 02 March 2026 00:37:42 +0000 (0:00:00.108) 0:00:02.337 **********
2026-03-02 00:37:46.335742 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:37:46.335763 | orchestrator |
2026-03-02 00:37:46.335823 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-02 00:37:46.335843 | orchestrator | Monday 02 March 2026 00:37:42 +0000 (0:00:00.225) 0:00:02.563 **********
2026-03-02 00:37:46.335864 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:37:46.335883 | orchestrator |
2026-03-02 00:37:46.335903 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-02 00:37:46.335920 | orchestrator | Monday 02 March 2026 00:37:43 +0000 (0:00:00.695) 0:00:03.258 **********
2026-03-02 00:37:46.335932 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:37:46.335945 | orchestrator |
2026-03-02 00:37:46.335958 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-02 00:37:46.335971 | orchestrator |
2026-03-02 00:37:46.335982 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-02 00:37:46.335993 | orchestrator | Monday 02 March 2026 00:37:43 +0000 (0:00:00.119) 0:00:03.378 **********
2026-03-02 00:37:46.336003 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:37:46.336014 | orchestrator |
2026-03-02 00:37:46.336112 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-02 00:37:46.336126 | orchestrator | Monday 02 March 2026 00:37:43 +0000 (0:00:00.130) 0:00:03.509 **********
2026-03-02 00:37:46.336151 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:37:46.336163 | orchestrator |
2026-03-02 00:37:46.336174 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-02 00:37:46.336185 | orchestrator | Monday 02 March 2026 00:37:44 +0000 (0:00:00.681) 0:00:04.190 **********
2026-03-02 00:37:46.336196 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:37:46.336220 | orchestrator |
2026-03-02 00:37:46.336231 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-02 00:37:46.336242 | orchestrator |
2026-03-02 00:37:46.336253 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-02 00:37:46.336267 | orchestrator | Monday 02 March 2026 00:37:44 +0000 (0:00:00.120) 0:00:04.310 **********
2026-03-02 00:37:46.336288 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:37:46.336305 | orchestrator |
2026-03-02 00:37:46.336316 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-02 00:37:46.336327 | orchestrator | Monday 02 March 2026 00:37:44 +0000 (0:00:00.109) 0:00:04.420 **********
2026-03-02 00:37:46.336337 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:37:46.336348 | orchestrator |
2026-03-02 00:37:46.336359 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-02 00:37:46.336377 | orchestrator | Monday 02 March 2026 00:37:45 +0000 (0:00:00.757) 0:00:05.178 **********
2026-03-02 00:37:46.336397 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:37:46.336418 | orchestrator |
2026-03-02 00:37:46.336437 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-02 00:37:46.336448 | orchestrator |
2026-03-02 00:37:46.336459 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-02 00:37:46.336470 | orchestrator | Monday 02 March 2026 00:37:45 +0000 (0:00:00.101) 0:00:05.279 **********
2026-03-02 00:37:46.336488 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:37:46.336508 | orchestrator |
2026-03-02 00:37:46.336525 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-02 00:37:46.336543 | orchestrator | Monday 02 March 2026 00:37:45 +0000 (0:00:00.106) 0:00:05.385 **********
2026-03-02 00:37:46.336555 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:37:46.336565 | orchestrator |
2026-03-02 00:37:46.336576 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-02 00:37:46.336638 | orchestrator | Monday 02 March 2026 00:37:45 +0000 (0:00:00.670) 0:00:06.056 **********
2026-03-02 00:37:46.336672 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:37:46.336685 | orchestrator |
2026-03-02 00:37:46.336696 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:37:46.336708 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:37:46.336720 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:37:46.336732 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-03-02 00:37:46.336743 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-02 00:37:46.336753 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-02 00:37:46.336764 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-02 00:37:46.336810 | orchestrator | 2026-03-02 00:37:46.336825 | orchestrator | 2026-03-02 00:37:46.336836 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 00:37:46.336847 | orchestrator | Monday 02 March 2026 00:37:45 +0000 (0:00:00.043) 0:00:06.100 ********** 2026-03-02 00:37:46.336858 | orchestrator | =============================================================================== 2026-03-02 00:37:46.336869 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.50s 2026-03-02 00:37:46.336880 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.79s 2026-03-02 00:37:46.336891 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.62s 2026-03-02 00:37:46.654385 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-02 00:37:58.932008 | orchestrator | 2026-03-02 00:37:58 | INFO  | Prepare task for execution of wait-for-connection. 2026-03-02 00:37:59.014684 | orchestrator | 2026-03-02 00:37:59 | INFO  | Task 18db8895-cfc1-4288-a26d-1227af1e4f91 (wait-for-connection) was prepared for execution. 2026-03-02 00:37:59.014764 | orchestrator | 2026-03-02 00:37:59 | INFO  | It takes a moment until task 18db8895-cfc1-4288-a26d-1227af1e4f91 (wait-for-connection) has been started and output is visible here. 
2026-03-02 00:38:14.789618 | orchestrator |
2026-03-02 00:38:14.789799 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-03-02 00:38:14.789819 | orchestrator |
2026-03-02 00:38:14.789832 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-03-02 00:38:14.789877 | orchestrator | Monday 02 March 2026 00:38:02 +0000 (0:00:00.170) 0:00:00.170 **********
2026-03-02 00:38:14.789889 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:38:14.789902 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:38:14.789913 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:38:14.789925 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:38:14.789936 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:38:14.789948 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:38:14.789959 | orchestrator |
2026-03-02 00:38:14.789988 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:38:14.790001 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:38:14.790066 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:38:14.790080 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:38:14.790091 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:38:14.790102 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:38:14.790114 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:38:14.790124 | orchestrator |
2026-03-02 00:38:14.790136 | orchestrator |
2026-03-02 00:38:14.790149 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 00:38:14.790163 | orchestrator | Monday 02 March 2026 00:38:14 +0000 (0:00:11.539) 0:00:11.710 **********
2026-03-02 00:38:14.790177 | orchestrator | ===============================================================================
2026-03-02 00:38:14.790190 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.54s
2026-03-02 00:38:15.078681 | orchestrator | + osism apply hddtemp
2026-03-02 00:38:27.187497 | orchestrator | 2026-03-02 00:38:27 | INFO  | Prepare task for execution of hddtemp.
2026-03-02 00:38:27.256934 | orchestrator | 2026-03-02 00:38:27 | INFO  | Task 5d387662-5830-4980-a7b3-9d0dcb4668c5 (hddtemp) was prepared for execution.
2026-03-02 00:38:27.257051 | orchestrator | 2026-03-02 00:38:27 | INFO  | It takes a moment until task 5d387662-5830-4980-a7b3-9d0dcb4668c5 (hddtemp) has been started and output is visible here.
2026-03-02 00:38:56.092825 | orchestrator |
2026-03-02 00:38:56.093054 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-03-02 00:38:56.093085 | orchestrator |
2026-03-02 00:38:56.093105 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-03-02 00:38:56.093124 | orchestrator | Monday 02 March 2026 00:38:31 +0000 (0:00:00.235) 0:00:00.235 **********
2026-03-02 00:38:56.093144 | orchestrator | ok: [testbed-manager]
2026-03-02 00:38:56.093195 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:38:56.093214 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:38:56.093227 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:38:56.093238 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:38:56.093249 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:38:56.093260 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:38:56.093271 | orchestrator |
2026-03-02 00:38:56.093282 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-03-02 00:38:56.093293 | orchestrator | Monday 02 March 2026 00:38:31 +0000 (0:00:00.612) 0:00:00.847 **********
2026-03-02 00:38:56.093306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-02 00:38:56.093319 | orchestrator |
2026-03-02 00:38:56.093330 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-03-02 00:38:56.093344 | orchestrator | Monday 02 March 2026 00:38:32 +0000 (0:00:01.011) 0:00:01.859 **********
2026-03-02 00:38:56.093363 | orchestrator | ok: [testbed-manager]
2026-03-02 00:38:56.093381 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:38:56.093400 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:38:56.093418 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:38:56.093435 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:38:56.093454 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:38:56.093472 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:38:56.093491 | orchestrator |
2026-03-02 00:38:56.093509 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-03-02 00:38:56.093527 | orchestrator | Monday 02 March 2026 00:38:34 +0000 (0:00:02.030) 0:00:03.889 **********
2026-03-02 00:38:56.093545 | orchestrator | changed: [testbed-manager]
2026-03-02 00:38:56.093564 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:38:56.093584 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:38:56.093602 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:38:56.093620 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:38:56.093637 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:38:56.093655 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:38:56.093672 | orchestrator |
2026-03-02 00:38:56.093690 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-03-02 00:38:56.093708 | orchestrator | Monday 02 March 2026 00:38:35 +0000 (0:00:01.053) 0:00:04.943 **********
2026-03-02 00:38:56.093725 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:38:56.093744 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:38:56.093762 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:38:56.093780 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:38:56.093800 | orchestrator | ok: [testbed-manager]
2026-03-02 00:38:56.093818 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:38:56.093834 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:38:56.093846 | orchestrator |
2026-03-02 00:38:56.093857 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-03-02 00:38:56.093868 | orchestrator | Monday 02 March 2026 00:38:37 +0000 (0:00:02.016) 0:00:06.959 **********
2026-03-02 00:38:56.093879 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:38:56.093890 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:38:56.093901 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:38:56.093912 | orchestrator | changed: [testbed-manager]
2026-03-02 00:38:56.093973 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:38:56.093985 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:38:56.093996 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:38:56.094006 | orchestrator |
2026-03-02 00:38:56.094078 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-03-02 00:38:56.094091 | orchestrator | Monday 02 March 2026 00:38:38 +0000 (0:00:00.788) 0:00:07.748 **********
2026-03-02 00:38:56.094105 | orchestrator | changed: [testbed-manager]
2026-03-02 00:38:56.094120 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:38:56.094147 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:38:56.094160 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:38:56.094173 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:38:56.094186 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:38:56.094199 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:38:56.094212 | orchestrator |
2026-03-02 00:38:56.094224 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-03-02 00:38:56.094237 | orchestrator | Monday 02 March 2026 00:38:52 +0000 (0:00:13.830) 0:00:21.579 **********
2026-03-02 00:38:56.094252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-02 00:38:56.094266 | orchestrator |
2026-03-02 00:38:56.094280 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-03-02 00:38:56.094292 | orchestrator | Monday 02 March 2026 00:38:53 +0000 (0:00:01.312) 0:00:22.891 **********
2026-03-02 00:38:56.094305 | orchestrator | changed: [testbed-manager]
2026-03-02 00:38:56.094318 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:38:56.094332 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:38:56.094350 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:38:56.094370 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:38:56.094392 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:38:56.094412 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:38:56.094431 | orchestrator |
2026-03-02 00:38:56.094454 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:38:56.094473 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:38:56.094511 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-02 00:38:56.094523 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-02 00:38:56.094534 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-02 00:38:56.094545 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-02 00:38:56.094556 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-02 00:38:56.094567 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-02 00:38:56.094578 | orchestrator |
2026-03-02 00:38:56.094589 | orchestrator |
2026-03-02 00:38:56.094600 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 00:38:56.094611 | orchestrator | Monday 02 March 2026 00:38:55 +0000 (0:00:01.944) 0:00:24.835 **********
2026-03-02 00:38:56.094622 | orchestrator | ===============================================================================
2026-03-02 00:38:56.094633 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.83s
2026-03-02 00:38:56.094645 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.03s
2026-03-02 00:38:56.094656 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 2.02s
2026-03-02 00:38:56.094666 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.94s
2026-03-02 00:38:56.094677 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.31s
2026-03-02 00:38:56.094688 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.05s
2026-03-02 00:38:56.094707 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.01s
2026-03-02 00:38:56.094718 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.79s
2026-03-02 00:38:56.094729 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.61s
2026-03-02 00:38:56.385416 | orchestrator | ++ semver latest 7.1.1
2026-03-02 00:38:56.430688 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-02 00:38:56.430791 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-02 00:38:56.430808 | orchestrator | + sudo systemctl restart manager.service
2026-03-02 00:39:34.147576 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-02 00:39:34.147690 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-03-02 00:39:34.147706 | orchestrator | + local max_attempts=60
2026-03-02 00:39:34.147719 | orchestrator | + local name=ceph-ansible
2026-03-02 00:39:34.147731 | orchestrator | + local attempt_num=1
2026-03-02 00:39:34.147743 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-02 00:39:34.176896 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-02 00:39:34.177040 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-02 00:39:34.177056 | orchestrator | + sleep 5
2026-03-02 00:39:39.182812 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-02 00:39:39.211774 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-02 00:39:39.211855 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-02 00:39:39.211878 | orchestrator | + sleep 5
2026-03-02 00:39:44.215476 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-02 00:39:44.241205 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-02 00:39:44.241334 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-02 00:39:44.241358 | orchestrator | + sleep 5
2026-03-02 00:39:49.244632 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-02 00:39:49.282598 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-02 00:39:49.282698 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-02 00:39:49.282713 | orchestrator | + sleep 5
2026-03-02 00:39:54.287394 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-02 00:39:54.329243 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-02 00:39:54.329322 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-02 00:39:54.329329 | orchestrator | + sleep 5
2026-03-02 00:39:59.332931 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-02 00:39:59.363493 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-02 00:39:59.363535 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-02 00:39:59.363541 | orchestrator | + sleep 5
2026-03-02 00:40:04.367961 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-02 00:40:04.405685 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-02 00:40:04.406161 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-02 00:40:04.406178 | orchestrator | + sleep 5
2026-03-02 00:40:09.412161 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-02 00:40:09.452270 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-02 00:40:09.452389 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-02 00:40:09.452412 | orchestrator | + sleep 5
2026-03-02 00:40:14.455321 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-02 00:40:14.490627 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-02 00:40:14.490760 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-02 00:40:14.490786 | orchestrator | + sleep 5
2026-03-02 00:40:19.494305 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-02 00:40:19.528551 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-02 00:40:19.528626 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-02 00:40:19.528635 | orchestrator | + sleep 5
2026-03-02 00:40:24.532879 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-02 00:40:24.572308 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-02 00:40:24.572416 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-02 00:40:24.572434 | orchestrator | + sleep 5
2026-03-02 00:40:29.576843 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-02 00:40:29.619497 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-02 00:40:29.619602 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-02 00:40:29.619650 | orchestrator | + sleep 5
2026-03-02 00:40:34.623537 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-02 00:40:34.660707 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-02 00:40:34.660809 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-02 00:40:34.660824 | orchestrator | + sleep 5
2026-03-02 00:40:39.665349 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-02 00:40:39.701397 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-02 00:40:39.701478 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-02 00:40:39.701493 | orchestrator | + local max_attempts=60
2026-03-02 00:40:39.701505 | orchestrator | + local name=kolla-ansible
2026-03-02 00:40:39.701517 | orchestrator | + local attempt_num=1
2026-03-02 00:40:39.702159 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-02 00:40:39.733435 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-02 00:40:39.733540 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-02 00:40:39.733670 | orchestrator | + local max_attempts=60
2026-03-02 00:40:39.733686 | orchestrator | + local name=osism-ansible
2026-03-02 00:40:39.733698 | orchestrator | + local attempt_num=1
2026-03-02 00:40:39.734397 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-02 00:40:39.766668 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-02 00:40:39.766774 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-02 00:40:39.766791 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-03-02 00:40:39.937802 | orchestrator | ARA in ceph-ansible already disabled.
2026-03-02 00:40:40.063639 | orchestrator | ARA in kolla-ansible already disabled.
2026-03-02 00:40:40.374367 | orchestrator | ARA in osism-kubernetes already disabled.
2026-03-02 00:40:40.374457 | orchestrator | + osism apply gather-facts
2026-03-02 00:40:52.479859 | orchestrator | 2026-03-02 00:40:52 | INFO  | Prepare task for execution of gather-facts.
2026-03-02 00:40:52.544728 | orchestrator | 2026-03-02 00:40:52 | INFO  | Task 48671c73-b052-4e24-b9b5-9741d585a072 (gather-facts) was prepared for execution.
2026-03-02 00:40:52.544842 | orchestrator | 2026-03-02 00:40:52 | INFO  | It takes a moment until task 48671c73-b052-4e24-b9b5-9741d585a072 (gather-facts) has been started and output is visible here.
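The `set -x` trace above shows the shape of the `wait_for_container_healthy` helper: poll `docker inspect` for the container's health status, sleep 5 seconds between attempts, and give up after `max_attempts` polls. A minimal sketch of that loop, reconstructed from the trace (the actual helper in the testbed configuration scripts may differ in detail; `health_status` is a wrapper introduced here so the loop can be exercised without a Docker daemon):

```shell
# Probe seam around `docker inspect` (introduced for this sketch; the trace
# calls docker inspect inline).
health_status() {
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}

# Poll every 5 seconds until the named container reports "healthy";
# fail after max_attempts polls (as seen in the trace: "unhealthy" and
# "starting" keep the loop going, "healthy" exits it).
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    while [ "$(health_status "$name")" != "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
    return 0
}
```

In the run above the loop cycles through `unhealthy` and then `starting` for roughly a minute after `manager.service` is restarted, before `ceph-ansible` reports `healthy`; `kolla-ansible` and `osism-ansible` are already healthy on the first poll.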
2026-03-02 00:41:05.255306 | orchestrator | 2026-03-02 00:41:05.255428 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-02 00:41:05.255454 | orchestrator | 2026-03-02 00:41:05.255473 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-02 00:41:05.255491 | orchestrator | Monday 02 March 2026 00:40:56 +0000 (0:00:00.161) 0:00:00.161 ********** 2026-03-02 00:41:05.255510 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:41:05.255529 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:41:05.255550 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:41:05.255562 | orchestrator | ok: [testbed-manager] 2026-03-02 00:41:05.255573 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:41:05.255584 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:41:05.255595 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:41:05.255606 | orchestrator | 2026-03-02 00:41:05.255617 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-02 00:41:05.255627 | orchestrator | 2026-03-02 00:41:05.255639 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-02 00:41:05.255650 | orchestrator | Monday 02 March 2026 00:41:04 +0000 (0:00:08.131) 0:00:08.293 ********** 2026-03-02 00:41:05.255661 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:41:05.255673 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:41:05.255684 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:41:05.255695 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:41:05.255706 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:41:05.255716 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:41:05.255727 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:41:05.255738 | orchestrator | 2026-03-02 00:41:05.255749 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-02 00:41:05.255760 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-02 00:41:05.255803 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-02 00:41:05.255815 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-02 00:41:05.255844 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-02 00:41:05.255859 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-02 00:41:05.255872 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-02 00:41:05.255886 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-02 00:41:05.255900 | orchestrator | 2026-03-02 00:41:05.255913 | orchestrator | 2026-03-02 00:41:05.255926 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 00:41:05.255939 | orchestrator | Monday 02 March 2026 00:41:05 +0000 (0:00:00.503) 0:00:08.797 ********** 2026-03-02 00:41:05.255951 | orchestrator | =============================================================================== 2026-03-02 00:41:05.255962 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.13s 2026-03-02 00:41:05.255973 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2026-03-02 00:41:05.473622 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-02 00:41:05.482619 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-02 
00:41:05.490190 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-03-02 00:41:05.499544 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-03-02 00:41:05.509506 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-03-02 00:41:05.522621 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-03-02 00:41:05.541431 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-03-02 00:41:05.551408 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-03-02 00:41:05.574323 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-03-02 00:41:05.583015 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-03-02 00:41:05.591593 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-03-02 00:41:05.600228 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-03-02 00:41:05.607941 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-03-02 00:41:05.619577 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-03-02 00:41:05.629380 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-03-02 00:41:05.636740 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-03-02 00:41:05.644221 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-03-02 00:41:05.651456 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-03-02 00:41:05.658899 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-03-02 00:41:05.666801 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-03-02 00:41:05.674942 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-03-02 00:41:05.682660 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-03-02 00:41:05.691398 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-03-02 00:41:05.699953 | orchestrator | + [[ false == \t\r\u\e ]]
2026-03-02 00:41:05.968879 | orchestrator | ok: Runtime: 0:24:09.550296
2026-03-02 00:41:06.076724 |
2026-03-02 00:41:06.076902 | TASK [Deploy services]
2026-03-02 00:41:06.612143 | orchestrator | skipping: Conditional result was False
2026-03-02 00:41:06.629686 |
2026-03-02 00:41:06.629848 | TASK [Deploy in a nutshell]
2026-03-02 00:41:07.332506 | orchestrator |
2026-03-02 00:41:07.332657 | orchestrator | # PULL IMAGES
2026-03-02 00:41:07.332670 | orchestrator |
2026-03-02 00:41:07.332678 | orchestrator | + set -e
2026-03-02 00:41:07.332687 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-02 00:41:07.332699 | orchestrator | ++ export INTERACTIVE=false
2026-03-02 00:41:07.332708 | orchestrator | ++ INTERACTIVE=false
2026-03-02 00:41:07.332736 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-02 00:41:07.332749 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-02 00:41:07.332757 | orchestrator | + source /opt/manager-vars.sh
2026-03-02 00:41:07.332763 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-02 00:41:07.332773 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-02 00:41:07.332781 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-02 00:41:07.332791 | orchestrator | ++ CEPH_VERSION=reef
2026-03-02 00:41:07.332798 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-02 00:41:07.332808 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-02 00:41:07.332814 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-02 00:41:07.332823 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-02 00:41:07.332829 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-02 00:41:07.332837 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-02 00:41:07.332843 | orchestrator | ++ export ARA=false
2026-03-02 00:41:07.332848 | orchestrator | ++ ARA=false
2026-03-02 00:41:07.332854 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-02 00:41:07.332860 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-02 00:41:07.332867 | orchestrator | ++ export TEMPEST=true
2026-03-02 00:41:07.332875 | orchestrator | ++ TEMPEST=true
2026-03-02 00:41:07.332880 | orchestrator | ++ export IS_ZUUL=true
2026-03-02 00:41:07.332886 | orchestrator | ++ IS_ZUUL=true
2026-03-02 00:41:07.332891 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.58
2026-03-02 00:41:07.332897 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.58
2026-03-02 00:41:07.332903 | orchestrator | ++ export EXTERNAL_API=false
2026-03-02 00:41:07.332909 | orchestrator | ++ EXTERNAL_API=false
2026-03-02 00:41:07.332915 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-02 00:41:07.332920 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-02 00:41:07.332926 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-02 00:41:07.332931 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-02 00:41:07.332937 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-02 00:41:07.332942 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-02 00:41:07.332947 | orchestrator | + echo
2026-03-02 00:41:07.332953 | orchestrator | + echo '# PULL IMAGES'
2026-03-02 00:41:07.332958 | orchestrator | + echo
2026-03-02 00:41:07.333128 | orchestrator | ++ semver latest 7.0.0
2026-03-02 00:41:07.384319 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-02 00:41:07.384406 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-02 00:41:07.384413 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-03-02 00:41:09.093084 | orchestrator | 2026-03-02 00:41:09 | INFO  | Trying to run play pull-images in environment custom
2026-03-02 00:41:19.112892 | orchestrator | 2026-03-02 00:41:19 | INFO  | Prepare task for execution of pull-images.
2026-03-02 00:41:19.179332 | orchestrator | 2026-03-02 00:41:19 | INFO  | Task bf2df77e-a796-4965-91c6-29279224b44c (pull-images) was prepared for execution.
2026-03-02 00:41:19.179435 | orchestrator | 2026-03-02 00:41:19 | INFO  | Task bf2df77e-a796-4965-91c6-29279224b44c is running in background. No more output. Check ARA for logs.
2026-03-02 00:41:21.394258 | orchestrator | 2026-03-02 00:41:21 | INFO  | Trying to run play wipe-partitions in environment custom
2026-03-02 00:41:31.469970 | orchestrator | 2026-03-02 00:41:31 | INFO  | Prepare task for execution of wipe-partitions.
2026-03-02 00:41:31.534848 | orchestrator | 2026-03-02 00:41:31 | INFO  | Task 61326752-3f01-410a-b286-cf4e1d25f915 (wipe-partitions) was prepared for execution.
2026-03-02 00:41:31.534942 | orchestrator | 2026-03-02 00:41:31 | INFO  | It takes a moment until task 61326752-3f01-410a-b286-cf4e1d25f915 (wipe-partitions) has been started and output is visible here.
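The shell trace above links each numbered deploy/upgrade script into `/usr/local/bin` under a friendlier command name. A minimal sketch of that pattern follows; the directories and the `link_scripts` helper are hypothetical (the job hardcodes one `ln -sf` per script), and the simple "strip the `NNN-` prefix and `.sh` suffix" rule shown here does not cover the handful of custom names in the log such as `510-clusterapi.sh` becoming `deploy-kubernetes-clusterapi`.

```shell
#!/usr/bin/env bash
# Sketch of the symlink step from the trace above. link_scripts is an
# assumed helper, not part of the testbed repository; the job itself
# uses /opt/configuration/scripts and /usr/local/bin as the two dirs.
set -euo pipefail

link_scripts() {
    local src_dir="$1" bin_dir="$2" prefix="$3"
    local script name
    for script in "$src_dir"/*.sh; do
        [ -e "$script" ] || continue
        name="$(basename "$script" .sh)"   # e.g. 200-infrastructure
        name="${name#[0-9][0-9][0-9]-}"    # drop the numeric ordering prefix
        ln -sf "$script" "$bin_dir/$prefix-$name"
    done
}
```

Called as `link_scripts /opt/configuration/scripts/deploy /usr/local/bin deploy`, this would produce `deploy-infrastructure`, `deploy-openstack`, and so on, matching most of the links in the trace.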
2026-03-02 00:41:43.553766 | orchestrator |
2026-03-02 00:41:43.553872 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-03-02 00:41:43.553886 | orchestrator |
2026-03-02 00:41:43.553895 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-03-02 00:41:43.553908 | orchestrator | Monday 02 March 2026 00:41:35 +0000 (0:00:00.127) 0:00:00.127 **********
2026-03-02 00:41:43.553941 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:41:43.553952 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:41:43.553960 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:41:43.553968 | orchestrator |
2026-03-02 00:41:43.553975 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-03-02 00:41:43.553983 | orchestrator | Monday 02 March 2026 00:41:35 +0000 (0:00:00.555) 0:00:00.683 **********
2026-03-02 00:41:43.553995 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:41:43.554003 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:41:43.554010 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:41:43.554073 | orchestrator |
2026-03-02 00:41:43.554082 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-03-02 00:41:43.554091 | orchestrator | Monday 02 March 2026 00:41:36 +0000 (0:00:00.537) 0:00:01.002 **********
2026-03-02 00:41:43.554101 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:41:43.554110 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:41:43.554118 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:41:43.554166 | orchestrator |
2026-03-02 00:41:43.554175 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-03-02 00:41:43.554183 | orchestrator | Monday 02 March 2026 00:41:36 +0000 (0:00:00.218) 0:00:01.539 **********
2026-03-02 00:41:43.554191 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:41:43.554199 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:41:43.554207 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:41:43.554215 | orchestrator |
2026-03-02 00:41:43.554222 | orchestrator | TASK [Check device availability] ***********************************************
2026-03-02 00:41:43.554231 | orchestrator | Monday 02 March 2026 00:41:37 +0000 (0:00:00.218) 0:00:01.758 **********
2026-03-02 00:41:43.554238 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-02 00:41:43.554250 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-02 00:41:43.554258 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-02 00:41:43.554266 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-02 00:41:43.554274 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-02 00:41:43.554282 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-02 00:41:43.554290 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-02 00:41:43.554297 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-02 00:41:43.554304 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-02 00:41:43.554313 | orchestrator |
2026-03-02 00:41:43.554321 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-03-02 00:41:43.554329 | orchestrator | Monday 02 March 2026 00:41:38 +0000 (0:00:01.287) 0:00:03.045 **********
2026-03-02 00:41:43.554338 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-03-02 00:41:43.554347 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-03-02 00:41:43.554355 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-03-02 00:41:43.554363 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-03-02 00:41:43.554372 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-03-02 00:41:43.554381 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-03-02 00:41:43.554389 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-03-02 00:41:43.554397 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-03-02 00:41:43.554406 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-03-02 00:41:43.554414 | orchestrator |
2026-03-02 00:41:43.554422 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-03-02 00:41:43.554431 | orchestrator | Monday 02 March 2026 00:41:39 +0000 (0:00:01.502) 0:00:04.548 **********
2026-03-02 00:41:43.554440 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-02 00:41:43.554449 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-02 00:41:43.554457 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-02 00:41:43.554471 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-02 00:41:43.554487 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-02 00:41:43.554496 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-02 00:41:43.554504 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-02 00:41:43.554513 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-02 00:41:43.554521 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-02 00:41:43.554529 | orchestrator |
2026-03-02 00:41:43.554538 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-03-02 00:41:43.554547 | orchestrator | Monday 02 March 2026 00:41:41 +0000 (0:00:02.141) 0:00:06.689 **********
2026-03-02 00:41:43.554555 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:41:43.554563 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:41:43.554571 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:41:43.554579 | orchestrator |
2026-03-02 00:41:43.554588 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-03-02 00:41:43.554596 | orchestrator | Monday 02 March 2026 00:41:42 +0000 (0:00:00.587) 0:00:07.277 **********
2026-03-02 00:41:43.554604 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:41:43.554612 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:41:43.554621 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:41:43.554630 | orchestrator |
2026-03-02 00:41:43.554638 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:41:43.554648 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:41:43.554657 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:41:43.554681 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:41:43.554689 | orchestrator |
2026-03-02 00:41:43.554698 | orchestrator |
2026-03-02 00:41:43.554706 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 00:41:43.554714 | orchestrator | Monday 02 March 2026 00:41:43 +0000 (0:00:00.615) 0:00:07.892 **********
2026-03-02 00:41:43.554723 | orchestrator | ===============================================================================
2026-03-02 00:41:43.554731 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.14s
2026-03-02 00:41:43.554739 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.50s
2026-03-02 00:41:43.554750 | orchestrator | Check device availability ----------------------------------------------- 1.29s
2026-03-02 00:41:43.554759 | orchestrator | Request device events from the kernel ----------------------------------- 0.62s
2026-03-02 00:41:43.554767 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s
2026-03-02 00:41:43.554776 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.56s
2026-03-02 00:41:43.554785 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.54s
2026-03-02 00:41:43.554793 | orchestrator | Remove all rook related logical devices --------------------------------- 0.32s
2026-03-02 00:41:43.554802 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.22s
2026-03-02 00:41:55.851246 | orchestrator | 2026-03-02 00:41:55 | INFO  | Prepare task for execution of facts.
2026-03-02 00:41:55.917682 | orchestrator | 2026-03-02 00:41:55 | INFO  | Task 887e2e12-27c5-4521-addb-a25661fe024f (facts) was prepared for execution.
2026-03-02 00:41:55.917761 | orchestrator | 2026-03-02 00:41:55 | INFO  | It takes a moment until task 887e2e12-27c5-4521-addb-a25661fe024f (facts) has been started and output is visible here.
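The wipe-partitions play above runs four steps per disk: wipe filesystem signatures with wipefs, zero the first 32M, reload udev rules, then re-trigger kernel device events. A hedged sketch of that sequence as plain shell follows; the exact module arguments of the play are not visible in the log, so the flags here (notably `conv=fsync` and the bare `udevadm trigger`) are assumptions. The commands are destructive on real devices, so the runnable part is behind an explicit `--run` guard, and `wipe_device` also accepts a regular file for dry runs.

```shell
#!/usr/bin/env bash
# Sketch of the wipe sequence from the play above. DESTRUCTIVE: only
# ever pass disks you mean to wipe, e.g. ./wipe.sh --run /dev/sdb
set -euo pipefail

wipe_device() {
    wipefs --all "$1"                                  # drop fs/RAID/LVM signatures
    dd if=/dev/zero of="$1" bs=1M count=32 conv=fsync  # overwrite first 32M with zeros
}

refresh_udev() {
    udevadm control --reload-rules  # reload udev rules
    udevadm trigger                 # request device events from the kernel
}

if [ "${1:-}" = "--run" ]; then
    shift
    for dev in "$@"; do
        wipe_device "$dev"
    done
    refresh_udev
fi
```

In the job this is applied to `/dev/sdb`, `/dev/sdc`, and `/dev/sdd` on testbed-node-3 through testbed-node-5, clearing the disks before Ceph claims them.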
2026-03-02 00:42:07.813311 | orchestrator |
2026-03-02 00:42:07.813449 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-02 00:42:07.813470 | orchestrator |
2026-03-02 00:42:07.813527 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-02 00:42:07.813550 | orchestrator | Monday 02 March 2026 00:42:00 +0000 (0:00:00.262) 0:00:00.262 **********
2026-03-02 00:42:07.813580 | orchestrator | ok: [testbed-manager]
2026-03-02 00:42:07.813601 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:42:07.813618 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:42:07.813636 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:42:07.813652 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:42:07.813670 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:42:07.813688 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:42:07.813706 | orchestrator |
2026-03-02 00:42:07.813749 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-02 00:42:07.813770 | orchestrator | Monday 02 March 2026 00:42:01 +0000 (0:00:01.114) 0:00:01.376 **********
2026-03-02 00:42:07.813790 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:42:07.813805 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:42:07.813819 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:42:07.813831 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:42:07.813844 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:07.813857 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:42:07.813870 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:42:07.813883 | orchestrator |
2026-03-02 00:42:07.813900 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-02 00:42:07.813919 | orchestrator |
2026-03-02 00:42:07.813936 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-02 00:42:07.813955 | orchestrator | Monday 02 March 2026 00:42:02 +0000 (0:00:01.193) 0:00:02.570 **********
2026-03-02 00:42:07.813973 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:42:07.813991 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:42:07.814010 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:42:07.814108 | orchestrator | ok: [testbed-manager]
2026-03-02 00:42:07.814129 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:42:07.814145 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:42:07.814192 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:42:07.814204 | orchestrator |
2026-03-02 00:42:07.814219 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-02 00:42:07.814237 | orchestrator |
2026-03-02 00:42:07.814255 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-02 00:42:07.814275 | orchestrator | Monday 02 March 2026 00:42:07 +0000 (0:00:04.795) 0:00:07.365 **********
2026-03-02 00:42:07.814295 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:42:07.814314 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:42:07.814333 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:42:07.814351 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:42:07.814368 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:07.814385 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:42:07.814404 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:42:07.814424 | orchestrator |
2026-03-02 00:42:07.814444 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:42:07.814463 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:42:07.814482 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:42:07.814493 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:42:07.814504 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:42:07.814515 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:42:07.814543 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:42:07.814554 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:42:07.814564 | orchestrator |
2026-03-02 00:42:07.814575 | orchestrator |
2026-03-02 00:42:07.814586 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 00:42:07.814597 | orchestrator | Monday 02 March 2026 00:42:07 +0000 (0:00:00.394) 0:00:07.759 **********
2026-03-02 00:42:07.814608 | orchestrator | ===============================================================================
2026-03-02 00:42:07.814619 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.80s
2026-03-02 00:42:07.814629 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.19s
2026-03-02 00:42:07.814640 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.11s
2026-03-02 00:42:07.814651 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.39s
2026-03-02 00:42:09.804955 | orchestrator | 2026-03-02 00:42:09 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
2026-03-02 00:42:09.859902 | orchestrator | 2026-03-02 00:42:09 | INFO  | Task fce923bd-deb6-4423-8eb2-0443561f914d (ceph-configure-lvm-volumes) was prepared for execution.
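Each play in this job ends with a PLAY RECAP whose per-host counters (`failed=`, `unreachable=`) are what decide whether the run was healthy. A small helper like the following, which is not part of the job itself but a hypothetical sketch for post-processing logs like this one, scans recap lines on stdin and exits non-zero if any host reports failures:

```shell
#!/usr/bin/env bash
# Scan Ansible PLAY RECAP lines (as printed above) and fail if any
# host has failed > 0 or unreachable > 0.
set -euo pipefail

check_recap() {
    awk '
        /unreachable=/ {
            for (i = 1; i <= NF; i++) {
                split($i, kv, "=")
                if ((kv[1] == "failed" || kv[1] == "unreachable") && kv[2] + 0 > 0)
                    bad = 1
            }
        }
        END { exit bad }
    '
}
```

Used as `check_recap < job-output.txt`, every recap in this section would pass, since all hosts report `unreachable=0 failed=0`.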
2026-03-02 00:42:09.859999 | orchestrator | 2026-03-02 00:42:09 | INFO  | It takes a moment until task fce923bd-deb6-4423-8eb2-0443561f914d (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-03-02 00:42:20.027747 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-02 00:42:20.027853 | orchestrator | 2.16.14
2026-03-02 00:42:20.027867 | orchestrator |
2026-03-02 00:42:20.027883 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-02 00:42:20.027893 | orchestrator |
2026-03-02 00:42:20.027901 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-02 00:42:20.027909 | orchestrator | Monday 02 March 2026 00:42:13 +0000 (0:00:00.279) 0:00:00.279 **********
2026-03-02 00:42:20.027918 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-02 00:42:20.027926 | orchestrator |
2026-03-02 00:42:20.027934 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-02 00:42:20.027942 | orchestrator | Monday 02 March 2026 00:42:14 +0000 (0:00:00.224) 0:00:00.504 **********
2026-03-02 00:42:20.027950 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:42:20.027958 | orchestrator |
2026-03-02 00:42:20.027966 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:42:20.027974 | orchestrator | Monday 02 March 2026 00:42:14 +0000 (0:00:00.199) 0:00:00.703 **********
2026-03-02 00:42:20.027982 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-02 00:42:20.027989 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-02 00:42:20.027997 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-02 00:42:20.028005 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-02 00:42:20.028012 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-02 00:42:20.028020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-02 00:42:20.028028 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-02 00:42:20.028035 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-02 00:42:20.028043 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-02 00:42:20.028051 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-02 00:42:20.028080 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-02 00:42:20.028089 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-02 00:42:20.028096 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-02 00:42:20.028104 | orchestrator |
2026-03-02 00:42:20.028112 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:42:20.028119 | orchestrator | Monday 02 March 2026 00:42:14 +0000 (0:00:00.373) 0:00:01.077 **********
2026-03-02 00:42:20.028127 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:20.028135 | orchestrator |
2026-03-02 00:42:20.028143 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:42:20.028151 | orchestrator | Monday 02 March 2026 00:42:14 +0000 (0:00:00.173) 0:00:01.250 **********
2026-03-02 00:42:20.028158 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:20.028218 | orchestrator |
2026-03-02 00:42:20.028227 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:42:20.028238 | orchestrator | Monday 02 March 2026 00:42:14 +0000 (0:00:00.169) 0:00:01.419 **********
2026-03-02 00:42:20.028246 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:20.028254 | orchestrator |
2026-03-02 00:42:20.028262 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:42:20.028269 | orchestrator | Monday 02 March 2026 00:42:15 +0000 (0:00:00.181) 0:00:01.601 **********
2026-03-02 00:42:20.028278 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:20.028287 | orchestrator |
2026-03-02 00:42:20.028297 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:42:20.028306 | orchestrator | Monday 02 March 2026 00:42:15 +0000 (0:00:00.161) 0:00:01.762 **********
2026-03-02 00:42:20.028315 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:20.028324 | orchestrator |
2026-03-02 00:42:20.028333 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:42:20.028342 | orchestrator | Monday 02 March 2026 00:42:15 +0000 (0:00:00.196) 0:00:01.959 **********
2026-03-02 00:42:20.028352 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:20.028361 | orchestrator |
2026-03-02 00:42:20.028371 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:42:20.028380 | orchestrator | Monday 02 March 2026 00:42:15 +0000 (0:00:00.184) 0:00:02.143 **********
2026-03-02 00:42:20.028389 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:20.028399 | orchestrator |
2026-03-02 00:42:20.028408 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:42:20.028418 | orchestrator | Monday 02 March 2026 00:42:15 +0000 (0:00:00.188) 0:00:02.332 **********
2026-03-02 00:42:20.028427 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:20.028436 | orchestrator |
2026-03-02 00:42:20.028445 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:42:20.028455 | orchestrator | Monday 02 March 2026 00:42:16 +0000 (0:00:00.205) 0:00:02.538 **********
2026-03-02 00:42:20.028464 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8)
2026-03-02 00:42:20.028476 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8)
2026-03-02 00:42:20.028485 | orchestrator |
2026-03-02 00:42:20.028495 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:42:20.028518 | orchestrator | Monday 02 March 2026 00:42:16 +0000 (0:00:00.366) 0:00:02.904 **********
2026-03-02 00:42:20.028528 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e7c30f24-07cf-4e73-8c7c-bba1057c8cb7)
2026-03-02 00:42:20.028538 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e7c30f24-07cf-4e73-8c7c-bba1057c8cb7)
2026-03-02 00:42:20.028547 | orchestrator |
2026-03-02 00:42:20.028556 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:42:20.028573 | orchestrator | Monday 02 March 2026 00:42:16 +0000 (0:00:00.505) 0:00:03.409 **********
2026-03-02 00:42:20.028582 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7341868e-8f6a-460c-870a-5a0cce1fa311)
2026-03-02 00:42:20.028591 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7341868e-8f6a-460c-870a-5a0cce1fa311)
2026-03-02 00:42:20.028600 | orchestrator |
2026-03-02 00:42:20.028609 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:42:20.028618 | orchestrator | Monday 02 March 2026 00:42:17 +0000 (0:00:00.520) 0:00:03.930 **********
2026-03-02 00:42:20.028628 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_24fb8e5a-509d-4406-a727-cf15b40a450f)
2026-03-02 00:42:20.028638 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_24fb8e5a-509d-4406-a727-cf15b40a450f)
2026-03-02 00:42:20.028648 | orchestrator |
2026-03-02 00:42:20.028657 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:42:20.028666 | orchestrator | Monday 02 March 2026 00:42:18 +0000 (0:00:00.645) 0:00:04.576 **********
2026-03-02 00:42:20.028676 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-02 00:42:20.028685 | orchestrator |
2026-03-02 00:42:20.028692 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:42:20.028700 | orchestrator | Monday 02 March 2026 00:42:18 +0000 (0:00:00.314) 0:00:04.890 **********
2026-03-02 00:42:20.028719 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-02 00:42:20.028727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-02 00:42:20.028735 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-02 00:42:20.028742 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-02 00:42:20.028750 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-02 00:42:20.028758 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-02 00:42:20.028765 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-02 00:42:20.028773 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-02 00:42:20.028781 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-02 00:42:20.028789 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-02 00:42:20.028796 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-02 00:42:20.028804 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-02 00:42:20.028812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-02 00:42:20.028820 | orchestrator |
2026-03-02 00:42:20.028827 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:42:20.028835 | orchestrator | Monday 02 March 2026 00:42:18 +0000 (0:00:00.350) 0:00:05.241 **********
2026-03-02 00:42:20.028843 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:20.028851 | orchestrator |
2026-03-02 00:42:20.028859 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:42:20.028866 | orchestrator | Monday 02 March 2026 00:42:18 +0000 (0:00:00.178) 0:00:05.420 **********
2026-03-02 00:42:20.028874 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:20.028882 | orchestrator |
2026-03-02 00:42:20.028890 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:42:20.028897 | orchestrator | Monday 02 March 2026 00:42:19 +0000 (0:00:00.182) 0:00:05.602 **********
2026-03-02 00:42:20.028905 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:20.028918 | orchestrator |
2026-03-02 00:42:20.028926 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:42:20.028934 | orchestrator | Monday 02 March 2026 00:42:19 +0000 (0:00:00.177) 0:00:05.780 **********
2026-03-02 00:42:20.028942 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:20.028949 | orchestrator |
2026-03-02 00:42:20.028957 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:42:20.028965 | orchestrator | Monday 02 March 2026 00:42:19 +0000 (0:00:00.190) 0:00:05.970 **********
2026-03-02 00:42:20.028972 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:20.028980 | orchestrator |
2026-03-02 00:42:20.028992 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:42:20.029000 | orchestrator | Monday 02 March 2026 00:42:19 +0000 (0:00:00.173) 0:00:06.144 **********
2026-03-02 00:42:20.029007 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:20.029015 | orchestrator |
2026-03-02 00:42:20.029023 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:42:20.029031 | orchestrator | Monday 02 March 2026 00:42:19 +0000 (0:00:00.172) 0:00:06.317 **********
2026-03-02 00:42:20.029038 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:20.029046 | orchestrator |
2026-03-02 00:42:20.029058 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:42:26.719576 | orchestrator | Monday 02 March 2026 00:42:20 +0000 (0:00:00.167) 0:00:06.484 **********
2026-03-02 00:42:26.719691 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:26.719705 | orchestrator |
2026-03-02 00:42:26.719715 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:42:26.719723 | orchestrator | Monday 02 March 2026 00:42:20 +0000 (0:00:00.185) 0:00:06.669 **********
2026-03-02 00:42:26.719731 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-02 00:42:26.719740 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-02 00:42:26.719749 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-02 00:42:26.719757 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-02 00:42:26.719775 | orchestrator |
2026-03-02 00:42:26.719783 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:42:26.719791 | orchestrator | Monday 02 March 2026 00:42:21 +0000 (0:00:00.999) 0:00:07.669 **********
2026-03-02 00:42:26.719799 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:26.719807 | orchestrator |
2026-03-02 00:42:26.719815 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:42:26.719823 | orchestrator | Monday 02 March 2026 00:42:21 +0000 (0:00:00.190) 0:00:07.859 **********
2026-03-02 00:42:26.719831 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:26.719839 | orchestrator |
2026-03-02 00:42:26.719847 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:42:26.719854 | orchestrator | Monday 02 March 2026 00:42:21 +0000 (0:00:00.193) 0:00:08.053 **********
2026-03-02 00:42:26.719862 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:26.719870 | orchestrator |
2026-03-02 00:42:26.719878 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:42:26.719886 | orchestrator | Monday 02 March 2026 00:42:21 +0000 (0:00:00.182) 0:00:08.235 **********
2026-03-02 00:42:26.719894 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:42:26.719901 | orchestrator |
2026-03-02 00:42:26.719909 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-02 00:42:26.719917 | orchestrator | Monday 02 March 2026 00:42:21 +0000 (0:00:00.193) 0:00:08.429 **********
2026-03-02 00:42:26.719925 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-03-02 00:42:26.719933 |
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-03-02 00:42:26.719941 | orchestrator | 2026-03-02 00:42:26.719949 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-02 00:42:26.719957 | orchestrator | Monday 02 March 2026 00:42:22 +0000 (0:00:00.160) 0:00:08.590 ********** 2026-03-02 00:42:26.719987 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:42:26.719996 | orchestrator | 2026-03-02 00:42:26.720004 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-02 00:42:26.720012 | orchestrator | Monday 02 March 2026 00:42:22 +0000 (0:00:00.138) 0:00:08.728 ********** 2026-03-02 00:42:26.720019 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:42:26.720027 | orchestrator | 2026-03-02 00:42:26.720037 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-02 00:42:26.720045 | orchestrator | Monday 02 March 2026 00:42:22 +0000 (0:00:00.108) 0:00:08.837 ********** 2026-03-02 00:42:26.720053 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:42:26.720060 | orchestrator | 2026-03-02 00:42:26.720068 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-02 00:42:26.720076 | orchestrator | Monday 02 March 2026 00:42:22 +0000 (0:00:00.102) 0:00:08.939 ********** 2026-03-02 00:42:26.720084 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:42:26.720092 | orchestrator | 2026-03-02 00:42:26.720100 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-02 00:42:26.720108 | orchestrator | Monday 02 March 2026 00:42:22 +0000 (0:00:00.118) 0:00:09.058 ********** 2026-03-02 00:42:26.720116 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '271875e3-8908-5e0e-b413-64afee9519da'}}) 2026-03-02 00:42:26.720127 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '52125f52-6af3-5290-9fed-9584660c39a2'}}) 2026-03-02 00:42:26.720136 | orchestrator | 2026-03-02 00:42:26.720145 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-02 00:42:26.720154 | orchestrator | Monday 02 March 2026 00:42:22 +0000 (0:00:00.155) 0:00:09.213 ********** 2026-03-02 00:42:26.720165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '271875e3-8908-5e0e-b413-64afee9519da'}})  2026-03-02 00:42:26.720215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '52125f52-6af3-5290-9fed-9584660c39a2'}})  2026-03-02 00:42:26.720225 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:42:26.720235 | orchestrator | 2026-03-02 00:42:26.720244 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-02 00:42:26.720254 | orchestrator | Monday 02 March 2026 00:42:22 +0000 (0:00:00.123) 0:00:09.336 ********** 2026-03-02 00:42:26.720263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '271875e3-8908-5e0e-b413-64afee9519da'}})  2026-03-02 00:42:26.720273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '52125f52-6af3-5290-9fed-9584660c39a2'}})  2026-03-02 00:42:26.720282 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:42:26.720291 | orchestrator | 2026-03-02 00:42:26.720300 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-02 00:42:26.720309 | orchestrator | Monday 02 March 2026 00:42:23 +0000 (0:00:00.251) 0:00:09.588 ********** 2026-03-02 00:42:26.720319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '271875e3-8908-5e0e-b413-64afee9519da'}})  2026-03-02 00:42:26.720342 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '52125f52-6af3-5290-9fed-9584660c39a2'}})  2026-03-02 00:42:26.720351 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:42:26.720388 | orchestrator | 2026-03-02 00:42:26.720398 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-02 00:42:26.720407 | orchestrator | Monday 02 March 2026 00:42:23 +0000 (0:00:00.140) 0:00:09.728 ********** 2026-03-02 00:42:26.720416 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:42:26.720426 | orchestrator | 2026-03-02 00:42:26.720435 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-02 00:42:26.720444 | orchestrator | Monday 02 March 2026 00:42:23 +0000 (0:00:00.133) 0:00:09.862 ********** 2026-03-02 00:42:26.720452 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:42:26.720469 | orchestrator | 2026-03-02 00:42:26.720478 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-02 00:42:26.720487 | orchestrator | Monday 02 March 2026 00:42:23 +0000 (0:00:00.107) 0:00:09.970 ********** 2026-03-02 00:42:26.720495 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:42:26.720503 | orchestrator | 2026-03-02 00:42:26.720520 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-02 00:42:26.720528 | orchestrator | Monday 02 March 2026 00:42:23 +0000 (0:00:00.121) 0:00:10.091 ********** 2026-03-02 00:42:26.720536 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:42:26.720544 | orchestrator | 2026-03-02 00:42:26.720552 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-02 00:42:26.720559 | orchestrator | Monday 02 March 2026 00:42:23 +0000 (0:00:00.119) 0:00:10.210 ********** 2026-03-02 00:42:26.720567 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:42:26.720575 | orchestrator | 2026-03-02 
00:42:26.720582 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-02 00:42:26.720590 | orchestrator | Monday 02 March 2026 00:42:23 +0000 (0:00:00.124) 0:00:10.335 ********** 2026-03-02 00:42:26.720598 | orchestrator | ok: [testbed-node-3] => { 2026-03-02 00:42:26.720606 | orchestrator |  "ceph_osd_devices": { 2026-03-02 00:42:26.720614 | orchestrator |  "sdb": { 2026-03-02 00:42:26.720621 | orchestrator |  "osd_lvm_uuid": "271875e3-8908-5e0e-b413-64afee9519da" 2026-03-02 00:42:26.720629 | orchestrator |  }, 2026-03-02 00:42:26.720637 | orchestrator |  "sdc": { 2026-03-02 00:42:26.720645 | orchestrator |  "osd_lvm_uuid": "52125f52-6af3-5290-9fed-9584660c39a2" 2026-03-02 00:42:26.720653 | orchestrator |  } 2026-03-02 00:42:26.720660 | orchestrator |  } 2026-03-02 00:42:26.720668 | orchestrator | } 2026-03-02 00:42:26.720676 | orchestrator | 2026-03-02 00:42:26.720684 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-02 00:42:26.720692 | orchestrator | Monday 02 March 2026 00:42:24 +0000 (0:00:00.124) 0:00:10.460 ********** 2026-03-02 00:42:26.720699 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:42:26.720707 | orchestrator | 2026-03-02 00:42:26.720715 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-02 00:42:26.720722 | orchestrator | Monday 02 March 2026 00:42:24 +0000 (0:00:00.111) 0:00:10.571 ********** 2026-03-02 00:42:26.720730 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:42:26.720738 | orchestrator | 2026-03-02 00:42:26.720745 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-02 00:42:26.720753 | orchestrator | Monday 02 March 2026 00:42:24 +0000 (0:00:00.123) 0:00:10.694 ********** 2026-03-02 00:42:26.720761 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:42:26.720768 | orchestrator | 2026-03-02 
00:42:26.720776 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-02 00:42:26.720784 | orchestrator | Monday 02 March 2026 00:42:24 +0000 (0:00:00.107) 0:00:10.802 ********** 2026-03-02 00:42:26.720792 | orchestrator | changed: [testbed-node-3] => { 2026-03-02 00:42:26.720799 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-02 00:42:26.720807 | orchestrator |  "ceph_osd_devices": { 2026-03-02 00:42:26.720815 | orchestrator |  "sdb": { 2026-03-02 00:42:26.720823 | orchestrator |  "osd_lvm_uuid": "271875e3-8908-5e0e-b413-64afee9519da" 2026-03-02 00:42:26.720831 | orchestrator |  }, 2026-03-02 00:42:26.720839 | orchestrator |  "sdc": { 2026-03-02 00:42:26.720847 | orchestrator |  "osd_lvm_uuid": "52125f52-6af3-5290-9fed-9584660c39a2" 2026-03-02 00:42:26.720855 | orchestrator |  } 2026-03-02 00:42:26.720862 | orchestrator |  }, 2026-03-02 00:42:26.720870 | orchestrator |  "lvm_volumes": [ 2026-03-02 00:42:26.720878 | orchestrator |  { 2026-03-02 00:42:26.720885 | orchestrator |  "data": "osd-block-271875e3-8908-5e0e-b413-64afee9519da", 2026-03-02 00:42:26.720893 | orchestrator |  "data_vg": "ceph-271875e3-8908-5e0e-b413-64afee9519da" 2026-03-02 00:42:26.720906 | orchestrator |  }, 2026-03-02 00:42:26.720914 | orchestrator |  { 2026-03-02 00:42:26.720922 | orchestrator |  "data": "osd-block-52125f52-6af3-5290-9fed-9584660c39a2", 2026-03-02 00:42:26.720930 | orchestrator |  "data_vg": "ceph-52125f52-6af3-5290-9fed-9584660c39a2" 2026-03-02 00:42:26.720937 | orchestrator |  } 2026-03-02 00:42:26.720945 | orchestrator |  ] 2026-03-02 00:42:26.720953 | orchestrator |  } 2026-03-02 00:42:26.720961 | orchestrator | } 2026-03-02 00:42:26.720968 | orchestrator | 2026-03-02 00:42:26.720976 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-02 00:42:26.720984 | orchestrator | Monday 02 March 2026 00:42:24 +0000 (0:00:00.297) 0:00:11.099 ********** 2026-03-02 
00:42:26.720992 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-02 00:42:26.720999 | orchestrator | 2026-03-02 00:42:26.721007 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-02 00:42:26.721015 | orchestrator | 2026-03-02 00:42:26.721023 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-02 00:42:26.721031 | orchestrator | Monday 02 March 2026 00:42:26 +0000 (0:00:01.643) 0:00:12.743 ********** 2026-03-02 00:42:26.721038 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-02 00:42:26.721046 | orchestrator | 2026-03-02 00:42:26.721058 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-02 00:42:26.721066 | orchestrator | Monday 02 March 2026 00:42:26 +0000 (0:00:00.214) 0:00:12.958 ********** 2026-03-02 00:42:26.721074 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:42:26.721082 | orchestrator | 2026-03-02 00:42:26.721095 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:33.711939 | orchestrator | Monday 02 March 2026 00:42:26 +0000 (0:00:00.217) 0:00:13.175 ********** 2026-03-02 00:42:33.712019 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-02 00:42:33.712029 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-02 00:42:33.712036 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-02 00:42:33.712043 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-02 00:42:33.712049 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-02 00:42:33.712055 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-02 00:42:33.712061 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-02 00:42:33.712071 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-02 00:42:33.712077 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-02 00:42:33.712083 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-02 00:42:33.712089 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-02 00:42:33.712095 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-02 00:42:33.712102 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-02 00:42:33.712108 | orchestrator | 2026-03-02 00:42:33.712115 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:33.712121 | orchestrator | Monday 02 March 2026 00:42:27 +0000 (0:00:00.359) 0:00:13.535 ********** 2026-03-02 00:42:33.712127 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:33.712134 | orchestrator | 2026-03-02 00:42:33.712141 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:33.712147 | orchestrator | Monday 02 March 2026 00:42:27 +0000 (0:00:00.186) 0:00:13.721 ********** 2026-03-02 00:42:33.712173 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:33.712201 | orchestrator | 2026-03-02 00:42:33.712208 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:33.712214 | orchestrator | Monday 02 March 2026 00:42:27 +0000 (0:00:00.184) 0:00:13.906 ********** 2026-03-02 00:42:33.712220 | orchestrator | skipping: 
[testbed-node-4] 2026-03-02 00:42:33.712226 | orchestrator | 2026-03-02 00:42:33.712232 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:33.712239 | orchestrator | Monday 02 March 2026 00:42:27 +0000 (0:00:00.177) 0:00:14.083 ********** 2026-03-02 00:42:33.712245 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:33.712251 | orchestrator | 2026-03-02 00:42:33.712257 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:33.712263 | orchestrator | Monday 02 March 2026 00:42:27 +0000 (0:00:00.195) 0:00:14.278 ********** 2026-03-02 00:42:33.712269 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:33.712275 | orchestrator | 2026-03-02 00:42:33.712281 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:33.712287 | orchestrator | Monday 02 March 2026 00:42:28 +0000 (0:00:00.520) 0:00:14.799 ********** 2026-03-02 00:42:33.712293 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:33.712299 | orchestrator | 2026-03-02 00:42:33.712305 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:33.712311 | orchestrator | Monday 02 March 2026 00:42:28 +0000 (0:00:00.186) 0:00:14.985 ********** 2026-03-02 00:42:33.712317 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:33.712323 | orchestrator | 2026-03-02 00:42:33.712329 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:33.712335 | orchestrator | Monday 02 March 2026 00:42:28 +0000 (0:00:00.194) 0:00:15.180 ********** 2026-03-02 00:42:33.712341 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:33.712347 | orchestrator | 2026-03-02 00:42:33.712353 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:33.712360 | 
orchestrator | Monday 02 March 2026 00:42:28 +0000 (0:00:00.202) 0:00:15.382 ********** 2026-03-02 00:42:33.712366 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba) 2026-03-02 00:42:33.712373 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba) 2026-03-02 00:42:33.712379 | orchestrator | 2026-03-02 00:42:33.712398 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:33.712404 | orchestrator | Monday 02 March 2026 00:42:29 +0000 (0:00:00.378) 0:00:15.760 ********** 2026-03-02 00:42:33.712411 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5b76853c-a11b-45e9-97a5-74de733f1116) 2026-03-02 00:42:33.712417 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5b76853c-a11b-45e9-97a5-74de733f1116) 2026-03-02 00:42:33.712423 | orchestrator | 2026-03-02 00:42:33.712429 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:33.712435 | orchestrator | Monday 02 March 2026 00:42:29 +0000 (0:00:00.410) 0:00:16.171 ********** 2026-03-02 00:42:33.712441 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_34a77e0f-df07-4c87-b046-7d039bca2077) 2026-03-02 00:42:33.712447 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_34a77e0f-df07-4c87-b046-7d039bca2077) 2026-03-02 00:42:33.712453 | orchestrator | 2026-03-02 00:42:33.712460 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:33.712478 | orchestrator | Monday 02 March 2026 00:42:30 +0000 (0:00:00.396) 0:00:16.567 ********** 2026-03-02 00:42:33.712485 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b2184af5-6da0-496d-b48a-b0daa217c842) 2026-03-02 00:42:33.712492 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_b2184af5-6da0-496d-b48a-b0daa217c842) 2026-03-02 00:42:33.712499 | orchestrator | 2026-03-02 00:42:33.712512 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:33.712519 | orchestrator | Monday 02 March 2026 00:42:30 +0000 (0:00:00.413) 0:00:16.980 ********** 2026-03-02 00:42:33.712526 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-02 00:42:33.712533 | orchestrator | 2026-03-02 00:42:33.712541 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:33.712547 | orchestrator | Monday 02 March 2026 00:42:30 +0000 (0:00:00.312) 0:00:17.293 ********** 2026-03-02 00:42:33.712555 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-02 00:42:33.712562 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-02 00:42:33.712570 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-02 00:42:33.712576 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-02 00:42:33.712584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-02 00:42:33.712591 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-02 00:42:33.712598 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-02 00:42:33.712604 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-02 00:42:33.712611 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-02 00:42:33.712618 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-02 00:42:33.712625 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-02 00:42:33.712632 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-02 00:42:33.712639 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-02 00:42:33.712646 | orchestrator | 2026-03-02 00:42:33.712653 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:33.712660 | orchestrator | Monday 02 March 2026 00:42:31 +0000 (0:00:00.329) 0:00:17.622 ********** 2026-03-02 00:42:33.712668 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:33.712675 | orchestrator | 2026-03-02 00:42:33.712682 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:33.712689 | orchestrator | Monday 02 March 2026 00:42:31 +0000 (0:00:00.482) 0:00:18.105 ********** 2026-03-02 00:42:33.712696 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:33.712704 | orchestrator | 2026-03-02 00:42:33.712711 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:33.712718 | orchestrator | Monday 02 March 2026 00:42:31 +0000 (0:00:00.172) 0:00:18.277 ********** 2026-03-02 00:42:33.712726 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:33.712733 | orchestrator | 2026-03-02 00:42:33.712741 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:33.712748 | orchestrator | Monday 02 March 2026 00:42:31 +0000 (0:00:00.180) 0:00:18.458 ********** 2026-03-02 00:42:33.712755 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:33.712763 | orchestrator | 2026-03-02 00:42:33.712770 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-02 00:42:33.712777 | orchestrator | Monday 02 March 2026 00:42:32 +0000 (0:00:00.183) 0:00:18.642 ********** 2026-03-02 00:42:33.712785 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:33.712791 | orchestrator | 2026-03-02 00:42:33.712799 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:33.712805 | orchestrator | Monday 02 March 2026 00:42:32 +0000 (0:00:00.177) 0:00:18.819 ********** 2026-03-02 00:42:33.712813 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:33.712825 | orchestrator | 2026-03-02 00:42:33.712836 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:33.712844 | orchestrator | Monday 02 March 2026 00:42:32 +0000 (0:00:00.184) 0:00:19.004 ********** 2026-03-02 00:42:33.712851 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:33.712858 | orchestrator | 2026-03-02 00:42:33.712864 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:33.712870 | orchestrator | Monday 02 March 2026 00:42:32 +0000 (0:00:00.176) 0:00:19.180 ********** 2026-03-02 00:42:33.712876 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:33.712882 | orchestrator | 2026-03-02 00:42:33.712888 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:33.712895 | orchestrator | Monday 02 March 2026 00:42:32 +0000 (0:00:00.161) 0:00:19.341 ********** 2026-03-02 00:42:33.712901 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-02 00:42:33.712907 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-02 00:42:33.712914 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-02 00:42:33.712920 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-02 00:42:33.712926 | orchestrator | 2026-03-02 
00:42:33.712932 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:33.712938 | orchestrator | Monday 02 March 2026 00:42:33 +0000 (0:00:00.731) 0:00:20.073 ********** 2026-03-02 00:42:33.712945 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:38.969117 | orchestrator | 2026-03-02 00:42:38.969261 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:38.969273 | orchestrator | Monday 02 March 2026 00:42:33 +0000 (0:00:00.172) 0:00:20.245 ********** 2026-03-02 00:42:38.969281 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:38.969288 | orchestrator | 2026-03-02 00:42:38.969295 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:38.969302 | orchestrator | Monday 02 March 2026 00:42:33 +0000 (0:00:00.161) 0:00:20.407 ********** 2026-03-02 00:42:38.969308 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:38.969314 | orchestrator | 2026-03-02 00:42:38.969320 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:38.969327 | orchestrator | Monday 02 March 2026 00:42:34 +0000 (0:00:00.159) 0:00:20.566 ********** 2026-03-02 00:42:38.969333 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:38.969339 | orchestrator | 2026-03-02 00:42:38.969346 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-02 00:42:38.969352 | orchestrator | Monday 02 March 2026 00:42:34 +0000 (0:00:00.455) 0:00:21.022 ********** 2026-03-02 00:42:38.969358 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-02 00:42:38.969364 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-02 00:42:38.969371 | orchestrator | 2026-03-02 00:42:38.969377 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-03-02 00:42:38.969384 | orchestrator | Monday 02 March 2026 00:42:34 +0000 (0:00:00.149) 0:00:21.171 ********** 2026-03-02 00:42:38.969390 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:38.969396 | orchestrator | 2026-03-02 00:42:38.969403 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-02 00:42:38.969409 | orchestrator | Monday 02 March 2026 00:42:34 +0000 (0:00:00.125) 0:00:21.296 ********** 2026-03-02 00:42:38.969415 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:38.969421 | orchestrator | 2026-03-02 00:42:38.969427 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-02 00:42:38.969434 | orchestrator | Monday 02 March 2026 00:42:34 +0000 (0:00:00.113) 0:00:21.410 ********** 2026-03-02 00:42:38.969440 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:38.969446 | orchestrator | 2026-03-02 00:42:38.969453 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-02 00:42:38.969459 | orchestrator | Monday 02 March 2026 00:42:35 +0000 (0:00:00.119) 0:00:21.529 ********** 2026-03-02 00:42:38.969492 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:42:38.969499 | orchestrator | 2026-03-02 00:42:38.969505 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-02 00:42:38.969511 | orchestrator | Monday 02 March 2026 00:42:35 +0000 (0:00:00.116) 0:00:21.646 ********** 2026-03-02 00:42:38.969518 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de3a51bd-019b-527a-8dea-ff4c94e5d801'}}) 2026-03-02 00:42:38.969525 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8a84d633-ba5b-5049-b6da-2482ee8b3083'}}) 2026-03-02 00:42:38.969531 | orchestrator | 2026-03-02 00:42:38.969537 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-03-02 00:42:38.969543 | orchestrator | Monday 02 March 2026 00:42:35 +0000 (0:00:00.131) 0:00:21.777 ********** 2026-03-02 00:42:38.969551 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de3a51bd-019b-527a-8dea-ff4c94e5d801'}})  2026-03-02 00:42:38.969559 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8a84d633-ba5b-5049-b6da-2482ee8b3083'}})  2026-03-02 00:42:38.969566 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:38.969572 | orchestrator | 2026-03-02 00:42:38.969578 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-02 00:42:38.969584 | orchestrator | Monday 02 March 2026 00:42:35 +0000 (0:00:00.147) 0:00:21.925 ********** 2026-03-02 00:42:38.969590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de3a51bd-019b-527a-8dea-ff4c94e5d801'}})  2026-03-02 00:42:38.969596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8a84d633-ba5b-5049-b6da-2482ee8b3083'}})  2026-03-02 00:42:38.969604 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:38.969610 | orchestrator | 2026-03-02 00:42:38.969616 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-02 00:42:38.969622 | orchestrator | Monday 02 March 2026 00:42:35 +0000 (0:00:00.170) 0:00:22.096 ********** 2026-03-02 00:42:38.969628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de3a51bd-019b-527a-8dea-ff4c94e5d801'}})  2026-03-02 00:42:38.969636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8a84d633-ba5b-5049-b6da-2482ee8b3083'}})  2026-03-02 00:42:38.969643 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:38.969650 | 
orchestrator | 2026-03-02 00:42:38.969687 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-02 00:42:38.969695 | orchestrator | Monday 02 March 2026 00:42:35 +0000 (0:00:00.175) 0:00:22.271 ********** 2026-03-02 00:42:38.969702 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:42:38.969709 | orchestrator | 2026-03-02 00:42:38.969716 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-02 00:42:38.969724 | orchestrator | Monday 02 March 2026 00:42:35 +0000 (0:00:00.111) 0:00:22.382 ********** 2026-03-02 00:42:38.969731 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:42:38.969738 | orchestrator | 2026-03-02 00:42:38.969745 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-02 00:42:38.969752 | orchestrator | Monday 02 March 2026 00:42:36 +0000 (0:00:00.108) 0:00:22.491 ********** 2026-03-02 00:42:38.969775 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:38.969783 | orchestrator | 2026-03-02 00:42:38.969791 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-02 00:42:38.969798 | orchestrator | Monday 02 March 2026 00:42:36 +0000 (0:00:00.220) 0:00:22.711 ********** 2026-03-02 00:42:38.969805 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:38.969812 | orchestrator | 2026-03-02 00:42:38.969819 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-02 00:42:38.969826 | orchestrator | Monday 02 March 2026 00:42:36 +0000 (0:00:00.093) 0:00:22.805 ********** 2026-03-02 00:42:38.969833 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:38.969847 | orchestrator | 2026-03-02 00:42:38.969854 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-02 00:42:38.969861 | orchestrator | Monday 02 March 2026 00:42:36 +0000 
(0:00:00.132) 0:00:22.938 ********** 2026-03-02 00:42:38.969869 | orchestrator | ok: [testbed-node-4] => { 2026-03-02 00:42:38.969876 | orchestrator |  "ceph_osd_devices": { 2026-03-02 00:42:38.969883 | orchestrator |  "sdb": { 2026-03-02 00:42:38.969891 | orchestrator |  "osd_lvm_uuid": "de3a51bd-019b-527a-8dea-ff4c94e5d801" 2026-03-02 00:42:38.969899 | orchestrator |  }, 2026-03-02 00:42:38.969906 | orchestrator |  "sdc": { 2026-03-02 00:42:38.969913 | orchestrator |  "osd_lvm_uuid": "8a84d633-ba5b-5049-b6da-2482ee8b3083" 2026-03-02 00:42:38.969920 | orchestrator |  } 2026-03-02 00:42:38.969927 | orchestrator |  } 2026-03-02 00:42:38.969935 | orchestrator | } 2026-03-02 00:42:38.969942 | orchestrator | 2026-03-02 00:42:38.969949 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-02 00:42:38.969957 | orchestrator | Monday 02 March 2026 00:42:36 +0000 (0:00:00.110) 0:00:23.048 ********** 2026-03-02 00:42:38.969964 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:38.969971 | orchestrator | 2026-03-02 00:42:38.969978 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-02 00:42:38.969985 | orchestrator | Monday 02 March 2026 00:42:36 +0000 (0:00:00.096) 0:00:23.145 ********** 2026-03-02 00:42:38.969992 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:38.969999 | orchestrator | 2026-03-02 00:42:38.970007 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-02 00:42:38.970078 | orchestrator | Monday 02 March 2026 00:42:36 +0000 (0:00:00.110) 0:00:23.256 ********** 2026-03-02 00:42:38.970086 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:42:38.970093 | orchestrator | 2026-03-02 00:42:38.970099 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-02 00:42:38.970105 | orchestrator | Monday 02 March 2026 00:42:36 +0000 
(0:00:00.099) 0:00:23.355 **********
2026-03-02 00:42:38.970111 | orchestrator | changed: [testbed-node-4] => {
2026-03-02 00:42:38.970118 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-03-02 00:42:38.970124 | orchestrator |  "ceph_osd_devices": {
2026-03-02 00:42:38.970130 | orchestrator |  "sdb": {
2026-03-02 00:42:38.970136 | orchestrator |  "osd_lvm_uuid": "de3a51bd-019b-527a-8dea-ff4c94e5d801"
2026-03-02 00:42:38.970143 | orchestrator |  },
2026-03-02 00:42:38.970149 | orchestrator |  "sdc": {
2026-03-02 00:42:38.970155 | orchestrator |  "osd_lvm_uuid": "8a84d633-ba5b-5049-b6da-2482ee8b3083"
2026-03-02 00:42:38.970161 | orchestrator |  }
2026-03-02 00:42:38.970167 | orchestrator |  },
2026-03-02 00:42:38.970173 | orchestrator |  "lvm_volumes": [
2026-03-02 00:42:38.970196 | orchestrator |  {
2026-03-02 00:42:38.970203 | orchestrator |  "data": "osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801",
2026-03-02 00:42:38.970209 | orchestrator |  "data_vg": "ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801"
2026-03-02 00:42:38.970215 | orchestrator |  },
2026-03-02 00:42:38.970221 | orchestrator |  {
2026-03-02 00:42:38.970228 | orchestrator |  "data": "osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083",
2026-03-02 00:42:38.970234 | orchestrator |  "data_vg": "ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083"
2026-03-02 00:42:38.970240 | orchestrator |  }
2026-03-02 00:42:38.970246 | orchestrator |  ]
2026-03-02 00:42:38.970252 | orchestrator |  }
2026-03-02 00:42:38.970258 | orchestrator | }
2026-03-02 00:42:38.970264 | orchestrator |
2026-03-02 00:42:38.970270 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-02 00:42:38.970276 | orchestrator | Monday 02 March 2026 00:42:37 +0000 (0:00:00.176) 0:00:23.532 **********
2026-03-02 00:42:38.970283 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-02 00:42:38.970289 | orchestrator |
2026-03-02 00:42:38.970301 | orchestrator | PLAY [Ceph
configure LVM] ****************************************************** 2026-03-02 00:42:38.970307 | orchestrator | 2026-03-02 00:42:38.970314 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-02 00:42:38.970320 | orchestrator | Monday 02 March 2026 00:42:38 +0000 (0:00:00.929) 0:00:24.462 ********** 2026-03-02 00:42:38.970326 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-02 00:42:38.970332 | orchestrator | 2026-03-02 00:42:38.970338 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-02 00:42:38.970345 | orchestrator | Monday 02 March 2026 00:42:38 +0000 (0:00:00.535) 0:00:24.997 ********** 2026-03-02 00:42:38.970351 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:42:38.970357 | orchestrator | 2026-03-02 00:42:38.970363 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:38.970369 | orchestrator | Monday 02 March 2026 00:42:38 +0000 (0:00:00.201) 0:00:25.199 ********** 2026-03-02 00:42:38.970375 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-02 00:42:38.970381 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-02 00:42:38.970387 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-02 00:42:38.970393 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-02 00:42:38.970400 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-02 00:42:38.970410 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-02 00:42:45.588826 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-02 00:42:45.589574 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-02 00:42:45.589629 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-02 00:42:45.589643 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-02 00:42:45.589671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-02 00:42:45.589683 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-02 00:42:45.589694 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-02 00:42:45.589705 | orchestrator | 2026-03-02 00:42:45.589717 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:45.589729 | orchestrator | Monday 02 March 2026 00:42:39 +0000 (0:00:00.304) 0:00:25.503 ********** 2026-03-02 00:42:45.589741 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:45.589753 | orchestrator | 2026-03-02 00:42:45.589762 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:45.589772 | orchestrator | Monday 02 March 2026 00:42:39 +0000 (0:00:00.176) 0:00:25.680 ********** 2026-03-02 00:42:45.589781 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:45.589791 | orchestrator | 2026-03-02 00:42:45.589800 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:45.589810 | orchestrator | Monday 02 March 2026 00:42:39 +0000 (0:00:00.202) 0:00:25.882 ********** 2026-03-02 00:42:45.589819 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:45.589828 | orchestrator | 2026-03-02 00:42:45.589838 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:45.589847 | 
orchestrator | Monday 02 March 2026 00:42:39 +0000 (0:00:00.169) 0:00:26.052 ********** 2026-03-02 00:42:45.589861 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:45.589871 | orchestrator | 2026-03-02 00:42:45.589880 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:45.589889 | orchestrator | Monday 02 March 2026 00:42:39 +0000 (0:00:00.155) 0:00:26.208 ********** 2026-03-02 00:42:45.589920 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:45.589930 | orchestrator | 2026-03-02 00:42:45.589940 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:45.589956 | orchestrator | Monday 02 March 2026 00:42:39 +0000 (0:00:00.148) 0:00:26.357 ********** 2026-03-02 00:42:45.589973 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:45.589989 | orchestrator | 2026-03-02 00:42:45.590005 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:45.590085 | orchestrator | Monday 02 March 2026 00:42:40 +0000 (0:00:00.146) 0:00:26.503 ********** 2026-03-02 00:42:45.590100 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:45.590109 | orchestrator | 2026-03-02 00:42:45.590120 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:45.590129 | orchestrator | Monday 02 March 2026 00:42:40 +0000 (0:00:00.145) 0:00:26.648 ********** 2026-03-02 00:42:45.590139 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:45.590148 | orchestrator | 2026-03-02 00:42:45.590158 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:45.590167 | orchestrator | Monday 02 March 2026 00:42:40 +0000 (0:00:00.140) 0:00:26.789 ********** 2026-03-02 00:42:45.590176 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8) 2026-03-02 00:42:45.590226 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8) 2026-03-02 00:42:45.590238 | orchestrator | 2026-03-02 00:42:45.590248 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:45.590257 | orchestrator | Monday 02 March 2026 00:42:40 +0000 (0:00:00.601) 0:00:27.391 ********** 2026-03-02 00:42:45.590267 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a8122f95-b81e-4023-b303-8950dd4c9351) 2026-03-02 00:42:45.590276 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a8122f95-b81e-4023-b303-8950dd4c9351) 2026-03-02 00:42:45.590286 | orchestrator | 2026-03-02 00:42:45.590295 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:45.590305 | orchestrator | Monday 02 March 2026 00:42:41 +0000 (0:00:00.340) 0:00:27.731 ********** 2026-03-02 00:42:45.590314 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ac18fc5b-7614-46f9-bf3c-282e02a3d506) 2026-03-02 00:42:45.590324 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ac18fc5b-7614-46f9-bf3c-282e02a3d506) 2026-03-02 00:42:45.590333 | orchestrator | 2026-03-02 00:42:45.590342 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:42:45.590352 | orchestrator | Monday 02 March 2026 00:42:41 +0000 (0:00:00.317) 0:00:28.049 ********** 2026-03-02 00:42:45.590361 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3458d56d-fe8a-4fae-86e7-5458fccbe7bb) 2026-03-02 00:42:45.590370 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3458d56d-fe8a-4fae-86e7-5458fccbe7bb) 2026-03-02 00:42:45.590380 | orchestrator | 2026-03-02 00:42:45.590389 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-02 00:42:45.590398 | orchestrator | Monday 02 March 2026 00:42:41 +0000 (0:00:00.311) 0:00:28.360 ********** 2026-03-02 00:42:45.590408 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-02 00:42:45.590417 | orchestrator | 2026-03-02 00:42:45.590426 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:45.590454 | orchestrator | Monday 02 March 2026 00:42:42 +0000 (0:00:00.302) 0:00:28.663 ********** 2026-03-02 00:42:45.590464 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-02 00:42:45.590473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-02 00:42:45.590484 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-02 00:42:45.590493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-02 00:42:45.590577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-02 00:42:45.590587 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-02 00:42:45.590596 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-02 00:42:45.590606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-02 00:42:45.590615 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-02 00:42:45.590632 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-02 00:42:45.590648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-03-02 00:42:45.590663 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-02 00:42:45.590677 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-02 00:42:45.590693 | orchestrator | 2026-03-02 00:42:45.590708 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:45.590724 | orchestrator | Monday 02 March 2026 00:42:42 +0000 (0:00:00.338) 0:00:29.002 ********** 2026-03-02 00:42:45.590740 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:45.590755 | orchestrator | 2026-03-02 00:42:45.590772 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:45.590788 | orchestrator | Monday 02 March 2026 00:42:42 +0000 (0:00:00.150) 0:00:29.153 ********** 2026-03-02 00:42:45.590805 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:45.590823 | orchestrator | 2026-03-02 00:42:45.590840 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:45.590859 | orchestrator | Monday 02 March 2026 00:42:42 +0000 (0:00:00.185) 0:00:29.338 ********** 2026-03-02 00:42:45.590870 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:45.590879 | orchestrator | 2026-03-02 00:42:45.590889 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:45.590913 | orchestrator | Monday 02 March 2026 00:42:43 +0000 (0:00:00.183) 0:00:29.522 ********** 2026-03-02 00:42:45.590923 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:45.590933 | orchestrator | 2026-03-02 00:42:45.590942 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:45.590952 | orchestrator | Monday 02 March 2026 00:42:43 +0000 (0:00:00.182) 0:00:29.704 ********** 2026-03-02 00:42:45.590961 
| orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:45.590970 | orchestrator | 2026-03-02 00:42:45.590987 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:45.591005 | orchestrator | Monday 02 March 2026 00:42:43 +0000 (0:00:00.161) 0:00:29.866 ********** 2026-03-02 00:42:45.591016 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:45.591025 | orchestrator | 2026-03-02 00:42:45.591035 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:45.591044 | orchestrator | Monday 02 March 2026 00:42:43 +0000 (0:00:00.472) 0:00:30.338 ********** 2026-03-02 00:42:45.591054 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:45.591063 | orchestrator | 2026-03-02 00:42:45.591073 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:45.591083 | orchestrator | Monday 02 March 2026 00:42:44 +0000 (0:00:00.198) 0:00:30.536 ********** 2026-03-02 00:42:45.591098 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:45.591108 | orchestrator | 2026-03-02 00:42:45.591117 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:45.591127 | orchestrator | Monday 02 March 2026 00:42:44 +0000 (0:00:00.188) 0:00:30.725 ********** 2026-03-02 00:42:45.591136 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-02 00:42:45.591154 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-02 00:42:45.591163 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-02 00:42:45.591173 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-02 00:42:45.591182 | orchestrator | 2026-03-02 00:42:45.591209 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:45.591219 | orchestrator | Monday 02 March 2026 00:42:44 +0000 (0:00:00.603) 0:00:31.328 
********** 2026-03-02 00:42:45.591229 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:45.591238 | orchestrator | 2026-03-02 00:42:45.591248 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:45.591257 | orchestrator | Monday 02 March 2026 00:42:45 +0000 (0:00:00.180) 0:00:31.508 ********** 2026-03-02 00:42:45.591267 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:45.591276 | orchestrator | 2026-03-02 00:42:45.591285 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:45.591295 | orchestrator | Monday 02 March 2026 00:42:45 +0000 (0:00:00.199) 0:00:31.708 ********** 2026-03-02 00:42:45.591304 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:45.591314 | orchestrator | 2026-03-02 00:42:45.591323 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:42:45.591333 | orchestrator | Monday 02 March 2026 00:42:45 +0000 (0:00:00.180) 0:00:31.889 ********** 2026-03-02 00:42:45.591342 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:45.591351 | orchestrator | 2026-03-02 00:42:45.591370 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-02 00:42:49.257754 | orchestrator | Monday 02 March 2026 00:42:45 +0000 (0:00:00.155) 0:00:32.044 ********** 2026-03-02 00:42:49.257847 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-03-02 00:42:49.257860 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-03-02 00:42:49.257871 | orchestrator | 2026-03-02 00:42:49.257878 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-02 00:42:49.257883 | orchestrator | Monday 02 March 2026 00:42:45 +0000 (0:00:00.167) 0:00:32.211 ********** 2026-03-02 00:42:49.257889 | orchestrator | skipping: 
[testbed-node-5] 2026-03-02 00:42:49.257895 | orchestrator | 2026-03-02 00:42:49.257900 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-02 00:42:49.257905 | orchestrator | Monday 02 March 2026 00:42:45 +0000 (0:00:00.125) 0:00:32.337 ********** 2026-03-02 00:42:49.257911 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:49.257916 | orchestrator | 2026-03-02 00:42:49.257921 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-02 00:42:49.257926 | orchestrator | Monday 02 March 2026 00:42:45 +0000 (0:00:00.115) 0:00:32.452 ********** 2026-03-02 00:42:49.257934 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:49.257942 | orchestrator | 2026-03-02 00:42:49.257952 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-02 00:42:49.257959 | orchestrator | Monday 02 March 2026 00:42:46 +0000 (0:00:00.282) 0:00:32.734 ********** 2026-03-02 00:42:49.257967 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:42:49.257976 | orchestrator | 2026-03-02 00:42:49.257982 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-02 00:42:49.257986 | orchestrator | Monday 02 March 2026 00:42:46 +0000 (0:00:00.124) 0:00:32.859 ********** 2026-03-02 00:42:49.257993 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c1d64d47-37ed-5019-b7d5-718691437d08'}}) 2026-03-02 00:42:49.258004 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3d7235d6-f117-525f-ba2d-9ab371851486'}}) 2026-03-02 00:42:49.258055 | orchestrator | 2026-03-02 00:42:49.258063 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-02 00:42:49.258068 | orchestrator | Monday 02 March 2026 00:42:46 +0000 (0:00:00.176) 0:00:33.035 ********** 2026-03-02 00:42:49.258074 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c1d64d47-37ed-5019-b7d5-718691437d08'}})  2026-03-02 00:42:49.258101 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3d7235d6-f117-525f-ba2d-9ab371851486'}})  2026-03-02 00:42:49.258111 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:49.258118 | orchestrator | 2026-03-02 00:42:49.258127 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-02 00:42:49.258133 | orchestrator | Monday 02 March 2026 00:42:46 +0000 (0:00:00.157) 0:00:33.192 ********** 2026-03-02 00:42:49.258139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c1d64d47-37ed-5019-b7d5-718691437d08'}})  2026-03-02 00:42:49.258144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3d7235d6-f117-525f-ba2d-9ab371851486'}})  2026-03-02 00:42:49.258149 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:49.258155 | orchestrator | 2026-03-02 00:42:49.258162 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-02 00:42:49.258171 | orchestrator | Monday 02 March 2026 00:42:46 +0000 (0:00:00.146) 0:00:33.339 ********** 2026-03-02 00:42:49.258179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c1d64d47-37ed-5019-b7d5-718691437d08'}})  2026-03-02 00:42:49.258187 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3d7235d6-f117-525f-ba2d-9ab371851486'}})  2026-03-02 00:42:49.258231 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:49.258237 | orchestrator | 2026-03-02 00:42:49.258242 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-02 00:42:49.258247 | orchestrator | Monday 02 March 2026 00:42:47 +0000 
(0:00:00.133) 0:00:33.473 ********** 2026-03-02 00:42:49.258252 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:42:49.258257 | orchestrator | 2026-03-02 00:42:49.258262 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-02 00:42:49.258268 | orchestrator | Monday 02 March 2026 00:42:47 +0000 (0:00:00.126) 0:00:33.599 ********** 2026-03-02 00:42:49.258273 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:42:49.258278 | orchestrator | 2026-03-02 00:42:49.258283 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-02 00:42:49.258288 | orchestrator | Monday 02 March 2026 00:42:47 +0000 (0:00:00.133) 0:00:33.732 ********** 2026-03-02 00:42:49.258293 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:49.258298 | orchestrator | 2026-03-02 00:42:49.258304 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-02 00:42:49.258310 | orchestrator | Monday 02 March 2026 00:42:47 +0000 (0:00:00.127) 0:00:33.860 ********** 2026-03-02 00:42:49.258315 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:49.258321 | orchestrator | 2026-03-02 00:42:49.258327 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-02 00:42:49.258333 | orchestrator | Monday 02 March 2026 00:42:47 +0000 (0:00:00.132) 0:00:33.992 ********** 2026-03-02 00:42:49.258339 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:49.258345 | orchestrator | 2026-03-02 00:42:49.258351 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-02 00:42:49.258357 | orchestrator | Monday 02 March 2026 00:42:47 +0000 (0:00:00.137) 0:00:34.130 ********** 2026-03-02 00:42:49.258363 | orchestrator | ok: [testbed-node-5] => { 2026-03-02 00:42:49.258368 | orchestrator |  "ceph_osd_devices": { 2026-03-02 00:42:49.258374 | orchestrator |  "sdb": 
{ 2026-03-02 00:42:49.258393 | orchestrator |  "osd_lvm_uuid": "c1d64d47-37ed-5019-b7d5-718691437d08" 2026-03-02 00:42:49.258399 | orchestrator |  }, 2026-03-02 00:42:49.258405 | orchestrator |  "sdc": { 2026-03-02 00:42:49.258425 | orchestrator |  "osd_lvm_uuid": "3d7235d6-f117-525f-ba2d-9ab371851486" 2026-03-02 00:42:49.258431 | orchestrator |  } 2026-03-02 00:42:49.258437 | orchestrator |  } 2026-03-02 00:42:49.258444 | orchestrator | } 2026-03-02 00:42:49.258449 | orchestrator | 2026-03-02 00:42:49.258461 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-02 00:42:49.258466 | orchestrator | Monday 02 March 2026 00:42:47 +0000 (0:00:00.137) 0:00:34.268 ********** 2026-03-02 00:42:49.258471 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:49.258476 | orchestrator | 2026-03-02 00:42:49.258481 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-02 00:42:49.258486 | orchestrator | Monday 02 March 2026 00:42:48 +0000 (0:00:00.237) 0:00:34.505 ********** 2026-03-02 00:42:49.258491 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:49.258496 | orchestrator | 2026-03-02 00:42:49.258501 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-02 00:42:49.258506 | orchestrator | Monday 02 March 2026 00:42:48 +0000 (0:00:00.132) 0:00:34.638 ********** 2026-03-02 00:42:49.258511 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:42:49.258516 | orchestrator | 2026-03-02 00:42:49.258521 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-02 00:42:49.258526 | orchestrator | Monday 02 March 2026 00:42:48 +0000 (0:00:00.109) 0:00:34.748 ********** 2026-03-02 00:42:49.258531 | orchestrator | changed: [testbed-node-5] => { 2026-03-02 00:42:49.258536 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-02 00:42:49.258541 | orchestrator 
|  "ceph_osd_devices": {
2026-03-02 00:42:49.258547 | orchestrator |  "sdb": {
2026-03-02 00:42:49.258552 | orchestrator |  "osd_lvm_uuid": "c1d64d47-37ed-5019-b7d5-718691437d08"
2026-03-02 00:42:49.258557 | orchestrator |  },
2026-03-02 00:42:49.258562 | orchestrator |  "sdc": {
2026-03-02 00:42:49.258570 | orchestrator |  "osd_lvm_uuid": "3d7235d6-f117-525f-ba2d-9ab371851486"
2026-03-02 00:42:49.258575 | orchestrator |  }
2026-03-02 00:42:49.258580 | orchestrator |  },
2026-03-02 00:42:49.258585 | orchestrator |  "lvm_volumes": [
2026-03-02 00:42:49.258590 | orchestrator |  {
2026-03-02 00:42:49.258595 | orchestrator |  "data": "osd-block-c1d64d47-37ed-5019-b7d5-718691437d08",
2026-03-02 00:42:49.258601 | orchestrator |  "data_vg": "ceph-c1d64d47-37ed-5019-b7d5-718691437d08"
2026-03-02 00:42:49.258606 | orchestrator |  },
2026-03-02 00:42:49.258613 | orchestrator |  {
2026-03-02 00:42:49.258619 | orchestrator |  "data": "osd-block-3d7235d6-f117-525f-ba2d-9ab371851486",
2026-03-02 00:42:49.258624 | orchestrator |  "data_vg": "ceph-3d7235d6-f117-525f-ba2d-9ab371851486"
2026-03-02 00:42:49.258629 | orchestrator |  }
2026-03-02 00:42:49.258634 | orchestrator |  ]
2026-03-02 00:42:49.258639 | orchestrator |  }
2026-03-02 00:42:49.258644 | orchestrator | }
2026-03-02 00:42:49.258649 | orchestrator |
2026-03-02 00:42:49.258654 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-02 00:42:49.258659 | orchestrator | Monday 02 March 2026 00:42:48 +0000 (0:00:00.200) 0:00:34.948 **********
2026-03-02 00:42:49.258664 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-02 00:42:49.258669 | orchestrator |
2026-03-02 00:42:49.258674 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:42:49.258679 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-02 00:42:49.258685 |
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-02 00:42:49.258690 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-02 00:42:49.258696 | orchestrator |
2026-03-02 00:42:49.258701 | orchestrator |
2026-03-02 00:42:49.258705 | orchestrator |
2026-03-02 00:42:49.258711 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 00:42:49.258715 | orchestrator | Monday 02 March 2026 00:42:49 +0000 (0:00:00.758) 0:00:35.707 **********
2026-03-02 00:42:49.258724 | orchestrator | ===============================================================================
2026-03-02 00:42:49.258730 | orchestrator | Write configuration file ------------------------------------------------ 3.33s
2026-03-02 00:42:49.258735 | orchestrator | Add known links to the list of available block devices ------------------ 1.04s
2026-03-02 00:42:49.258740 | orchestrator | Add known partitions to the list of available block devices ------------- 1.02s
2026-03-02 00:42:49.258744 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s
2026-03-02 00:42:49.258749 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.98s
2026-03-02 00:42:49.258754 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s
2026-03-02 00:42:49.258759 | orchestrator | Print configuration data ------------------------------------------------ 0.67s
2026-03-02 00:42:49.258764 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2026-03-02 00:42:49.258769 | orchestrator | Get initial list of available block devices ----------------------------- 0.62s
2026-03-02 00:42:49.258774 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s
2026-03-02 00:42:49.258779 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s
2026-03-02 00:42:49.258784 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.57s
2026-03-02 00:42:49.258790 | orchestrator | Add known links to the list of available block devices ------------------ 0.52s
2026-03-02 00:42:49.258798 | orchestrator | Add known links to the list of available block devices ------------------ 0.52s
2026-03-02 00:42:49.479498 | orchestrator | Add known links to the list of available block devices ------------------ 0.51s
2026-03-02 00:42:49.479605 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.50s
2026-03-02 00:42:49.479624 | orchestrator | Add known partitions to the list of available block devices ------------- 0.48s
2026-03-02 00:42:49.479639 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.48s
2026-03-02 00:42:49.479648 | orchestrator | Add known partitions to the list of available block devices ------------- 0.47s
2026-03-02 00:42:49.479656 | orchestrator | Set DB devices config data ---------------------------------------------- 0.47s
2026-03-02 00:43:12.061633 | orchestrator | 2026-03-02 00:43:12 | INFO  | Task a95bfb26-0129-417c-ace3-154e324e37d6 (sync inventory) is running in background. Output coming soon.
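The "Print configuration data" output above shows the transformation the play performs for the plain "block only" case: each OSD device's `osd_lvm_uuid` becomes one `lvm_volumes` entry with an LV named `osd-block-<uuid>` inside a VG named `ceph-<uuid>`. The playbook's actual Jinja2 templating is not visible in the log; the following is an illustrative Python sketch of just that mapping, using the node-5 values printed above.

```python
def build_lvm_volumes(ceph_osd_devices):
    """Replicate the 'block only' lvm_volumes structure seen in the log:
    each device's osd_lvm_uuid yields an LV 'osd-block-<uuid>' inside
    a VG 'ceph-<uuid>'. Illustrative only, not the playbook's code."""
    return [
        {"data": f"osd-block-{spec['osd_lvm_uuid']}",
         "data_vg": f"ceph-{spec['osd_lvm_uuid']}"}
        for spec in ceph_osd_devices.values()
    ]

# Values taken verbatim from the testbed-node-5 log output above.
devices = {
    "sdb": {"osd_lvm_uuid": "c1d64d47-37ed-5019-b7d5-718691437d08"},
    "sdc": {"osd_lvm_uuid": "3d7235d6-f117-525f-ba2d-9ab371851486"},
}
print(build_lvm_volumes(devices))
```

Running this reproduces the `lvm_volumes` list that the "Write configuration file" handler then persists on the manager node.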
2026-03-02 00:43:37.035739 | orchestrator | 2026-03-02 00:43:13 | INFO  | Starting group_vars file reorganization
2026-03-02 00:43:37.035875 | orchestrator | 2026-03-02 00:43:13 | INFO  | Moved 0 file(s) to their respective directories
2026-03-02 00:43:37.035907 | orchestrator | 2026-03-02 00:43:13 | INFO  | Group_vars file reorganization completed
2026-03-02 00:43:37.035927 | orchestrator | 2026-03-02 00:43:16 | INFO  | Starting variable preparation from inventory
2026-03-02 00:43:37.035950 | orchestrator | 2026-03-02 00:43:19 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-02 00:43:37.035972 | orchestrator | 2026-03-02 00:43:19 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-02 00:43:37.035994 | orchestrator | 2026-03-02 00:43:19 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-02 00:43:37.036013 | orchestrator | 2026-03-02 00:43:19 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-02 00:43:37.036032 | orchestrator | 2026-03-02 00:43:19 | INFO  | Variable preparation completed
2026-03-02 00:43:37.036050 | orchestrator | 2026-03-02 00:43:20 | INFO  | Starting inventory overwrite handling
2026-03-02 00:43:37.036068 | orchestrator | 2026-03-02 00:43:20 | INFO  | Handling group overwrites in 99-overwrite
2026-03-02 00:43:37.036088 | orchestrator | 2026-03-02 00:43:20 | INFO  | Removing group frr:children from 60-generic
2026-03-02 00:43:37.036147 | orchestrator | 2026-03-02 00:43:20 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-02 00:43:37.036168 | orchestrator | 2026-03-02 00:43:20 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-02 00:43:37.036188 | orchestrator | 2026-03-02 00:43:20 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-02 00:43:37.036206 | orchestrator | 2026-03-02 00:43:20 | INFO  | Handling group overwrites in 20-roles
2026-03-02 00:43:37.036226 | orchestrator | 2026-03-02 00:43:20 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-02 00:43:37.036293 | orchestrator | 2026-03-02 00:43:20 | INFO  | Removed 5 group(s) in total
2026-03-02 00:43:37.036312 | orchestrator | 2026-03-02 00:43:20 | INFO  | Inventory overwrite handling completed
2026-03-02 00:43:37.036330 | orchestrator | 2026-03-02 00:43:21 | INFO  | Starting merge of inventory files
2026-03-02 00:43:37.036346 | orchestrator | 2026-03-02 00:43:21 | INFO  | Inventory files merged successfully
2026-03-02 00:43:37.036364 | orchestrator | 2026-03-02 00:43:26 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-02 00:43:37.036382 | orchestrator | 2026-03-02 00:43:35 | INFO  | Successfully wrote ClusterShell configuration
2026-03-02 00:43:37.036401 | orchestrator | [master 43ea167] 2026-03-02-00-43
2026-03-02 00:43:37.036418 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-02 00:43:39.001563 | orchestrator | 2026-03-02 00:43:39 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-03-02 00:43:39.049162 | orchestrator | 2026-03-02 00:43:39 | INFO  | Task 4deec52b-5490-495a-bc94-ac4b8282ffcb (ceph-create-lvm-devices) was prepared for execution.
2026-03-02 00:43:39.049262 | orchestrator | 2026-03-02 00:43:39 | INFO  | It takes a moment until task 4deec52b-5490-495a-bc94-ac4b8282ffcb (ceph-create-lvm-devices) has been started and output is visible here.
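The ceph-create-lvm-devices play that follows derives one LVM volume group and block logical volume per OSD device from a stable `osd_lvm_uuid`: the VG is named `ceph-<uuid>` and the LV `osd-block-<uuid>`, as the items in its "Create block VGs" and "Create block LVs" tasks show. A minimal sketch of that mapping, assuming the simple `ceph_osd_devices` dict layout visible in the play output:

```python
def block_vgs_from_osd_devices(ceph_osd_devices: dict) -> list[dict]:
    """Map each OSD device entry to its block VG/LV names, following the
    ceph-<uuid> / osd-block-<uuid> convention seen in the play output.

    ceph_osd_devices: {"sdb": {"osd_lvm_uuid": "..."}, ...}
    """
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in ceph_osd_devices.values()
    ]
```

Because the UUIDs are persisted in the inventory, rerunning the play yields the same VG/LV names, which keeps the task idempotent.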
2026-03-02 00:43:49.150781 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-02 00:43:49.150906 | orchestrator | 2.16.14
2026-03-02 00:43:49.150922 | orchestrator |
2026-03-02 00:43:49.150931 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-02 00:43:49.150939 | orchestrator |
2026-03-02 00:43:49.150947 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-02 00:43:49.150955 | orchestrator | Monday 02 March 2026 00:43:42 +0000 (0:00:00.273) 0:00:00.273 **********
2026-03-02 00:43:49.150963 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-02 00:43:49.150971 | orchestrator |
2026-03-02 00:43:49.150979 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-02 00:43:49.150986 | orchestrator | Monday 02 March 2026 00:43:42 +0000 (0:00:00.235) 0:00:00.509 **********
2026-03-02 00:43:49.150994 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:43:49.151002 | orchestrator |
2026-03-02 00:43:49.151009 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:43:49.151016 | orchestrator | Monday 02 March 2026 00:43:42 +0000 (0:00:00.213) 0:00:00.723 **********
2026-03-02 00:43:49.151024 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-02 00:43:49.151031 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-02 00:43:49.151039 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-02 00:43:49.151044 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-02 00:43:49.151049 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-02 00:43:49.151053 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-02 00:43:49.151058 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-02 00:43:49.151081 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-02 00:43:49.151085 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-02 00:43:49.151090 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-02 00:43:49.151094 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-02 00:43:49.151098 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-02 00:43:49.151113 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-02 00:43:49.151118 | orchestrator |
2026-03-02 00:43:49.151122 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:43:49.151126 | orchestrator | Monday 02 March 2026 00:43:43 +0000 (0:00:00.468) 0:00:01.192 **********
2026-03-02 00:43:49.151131 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:49.151135 | orchestrator |
2026-03-02 00:43:49.151139 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:43:49.151144 | orchestrator | Monday 02 March 2026 00:43:43 +0000 (0:00:00.180) 0:00:01.372 **********
2026-03-02 00:43:49.151148 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:49.151153 | orchestrator |
2026-03-02 00:43:49.151159 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:43:49.151166 | orchestrator | Monday 02 March 2026 00:43:43 +0000 (0:00:00.170) 0:00:01.542 **********
2026-03-02 00:43:49.151172 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:49.151179 | orchestrator |
2026-03-02 00:43:49.151185 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:43:49.151192 | orchestrator | Monday 02 March 2026 00:43:43 +0000 (0:00:00.173) 0:00:01.715 **********
2026-03-02 00:43:49.151197 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:49.151205 | orchestrator |
2026-03-02 00:43:49.151210 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:43:49.151214 | orchestrator | Monday 02 March 2026 00:43:44 +0000 (0:00:00.188) 0:00:01.904 **********
2026-03-02 00:43:49.151219 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:49.151223 | orchestrator |
2026-03-02 00:43:49.151227 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:43:49.151231 | orchestrator | Monday 02 March 2026 00:43:44 +0000 (0:00:00.201) 0:00:02.106 **********
2026-03-02 00:43:49.151236 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:49.151271 | orchestrator |
2026-03-02 00:43:49.151275 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:43:49.151280 | orchestrator | Monday 02 March 2026 00:43:44 +0000 (0:00:00.189) 0:00:02.295 **********
2026-03-02 00:43:49.151284 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:49.151288 | orchestrator |
2026-03-02 00:43:49.151292 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:43:49.151296 | orchestrator | Monday 02 March 2026 00:43:44 +0000 (0:00:00.186) 0:00:02.482 **********
2026-03-02 00:43:49.151301 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:49.151305 | orchestrator |
2026-03-02 00:43:49.151310 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:43:49.151314 | orchestrator | Monday 02 March 2026 00:43:44 +0000 (0:00:00.178) 0:00:02.661 **********
2026-03-02 00:43:49.151318 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8)
2026-03-02 00:43:49.151324 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8)
2026-03-02 00:43:49.151328 | orchestrator |
2026-03-02 00:43:49.151333 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:43:49.151352 | orchestrator | Monday 02 March 2026 00:43:45 +0000 (0:00:00.378) 0:00:03.039 **********
2026-03-02 00:43:49.151363 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e7c30f24-07cf-4e73-8c7c-bba1057c8cb7)
2026-03-02 00:43:49.151368 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e7c30f24-07cf-4e73-8c7c-bba1057c8cb7)
2026-03-02 00:43:49.151372 | orchestrator |
2026-03-02 00:43:49.151376 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:43:49.151381 | orchestrator | Monday 02 March 2026 00:43:45 +0000 (0:00:00.519) 0:00:03.558 **********
2026-03-02 00:43:49.151385 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7341868e-8f6a-460c-870a-5a0cce1fa311)
2026-03-02 00:43:49.151389 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7341868e-8f6a-460c-870a-5a0cce1fa311)
2026-03-02 00:43:49.151393 | orchestrator |
2026-03-02 00:43:49.151398 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:43:49.151402 | orchestrator | Monday 02 March 2026 00:43:46 +0000 (0:00:00.646) 0:00:04.205 **********
2026-03-02 00:43:49.151406 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_24fb8e5a-509d-4406-a727-cf15b40a450f)
2026-03-02 00:43:49.151410 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_24fb8e5a-509d-4406-a727-cf15b40a450f)
2026-03-02 00:43:49.151415 | orchestrator |
2026-03-02 00:43:49.151419 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:43:49.151423 | orchestrator | Monday 02 March 2026 00:43:47 +0000 (0:00:00.830) 0:00:05.035 **********
2026-03-02 00:43:49.151427 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-02 00:43:49.151432 | orchestrator |
2026-03-02 00:43:49.151436 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:43:49.151440 | orchestrator | Monday 02 March 2026 00:43:47 +0000 (0:00:00.267) 0:00:05.302 **********
2026-03-02 00:43:49.151445 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-02 00:43:49.151449 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-02 00:43:49.151453 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-02 00:43:49.151457 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-02 00:43:49.151462 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-02 00:43:49.151466 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-02 00:43:49.151471 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-02 00:43:49.151475 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-02 00:43:49.151480 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-02 00:43:49.151484 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-02 00:43:49.151488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-02 00:43:49.151493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-02 00:43:49.151497 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-02 00:43:49.151501 | orchestrator |
2026-03-02 00:43:49.151505 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:43:49.151509 | orchestrator | Monday 02 March 2026 00:43:47 +0000 (0:00:00.378) 0:00:05.681 **********
2026-03-02 00:43:49.151514 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:49.151518 | orchestrator |
2026-03-02 00:43:49.151522 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:43:49.151527 | orchestrator | Monday 02 March 2026 00:43:48 +0000 (0:00:00.171) 0:00:05.853 **********
2026-03-02 00:43:49.151535 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:49.151539 | orchestrator |
2026-03-02 00:43:49.151543 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:43:49.151547 | orchestrator | Monday 02 March 2026 00:43:48 +0000 (0:00:00.153) 0:00:06.006 **********
2026-03-02 00:43:49.151552 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:49.151556 | orchestrator |
2026-03-02 00:43:49.151560 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:43:49.151565 | orchestrator | Monday 02 March 2026 00:43:48 +0000 (0:00:00.165) 0:00:06.171 **********
2026-03-02 00:43:49.151569 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:49.151573 | orchestrator |
2026-03-02 00:43:49.151577 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:43:49.151582 | orchestrator | Monday 02 March 2026 00:43:48 +0000 (0:00:00.171) 0:00:06.342 **********
2026-03-02 00:43:49.151586 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:49.151590 | orchestrator |
2026-03-02 00:43:49.151594 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:43:49.151604 | orchestrator | Monday 02 March 2026 00:43:48 +0000 (0:00:00.207) 0:00:06.550 **********
2026-03-02 00:43:49.151608 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:49.151613 | orchestrator |
2026-03-02 00:43:49.151617 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:43:49.151621 | orchestrator | Monday 02 March 2026 00:43:48 +0000 (0:00:00.192) 0:00:06.743 **********
2026-03-02 00:43:49.151626 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:49.151630 | orchestrator |
2026-03-02 00:43:49.151637 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:43:56.720534 | orchestrator | Monday 02 March 2026 00:43:49 +0000 (0:00:00.198) 0:00:06.941 **********
2026-03-02 00:43:56.720629 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:56.720639 | orchestrator |
2026-03-02 00:43:56.720647 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:43:56.720653 | orchestrator | Monday 02 March 2026 00:43:49 +0000 (0:00:00.180) 0:00:07.123 **********
2026-03-02 00:43:56.720660 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-02 00:43:56.720667 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-02 00:43:56.720673 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-02 00:43:56.720679 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-02 00:43:56.720685 | orchestrator |
2026-03-02 00:43:56.720691 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:43:56.720697 | orchestrator | Monday 02 March 2026 00:43:50 +0000 (0:00:01.037) 0:00:08.160 **********
2026-03-02 00:43:56.720702 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:56.720708 | orchestrator |
2026-03-02 00:43:56.720715 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:43:56.720721 | orchestrator | Monday 02 March 2026 00:43:50 +0000 (0:00:00.204) 0:00:08.364 **********
2026-03-02 00:43:56.720728 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:56.720734 | orchestrator |
2026-03-02 00:43:56.720741 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:43:56.720747 | orchestrator | Monday 02 March 2026 00:43:50 +0000 (0:00:00.198) 0:00:08.562 **********
2026-03-02 00:43:56.720753 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:56.720760 | orchestrator |
2026-03-02 00:43:56.720766 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:43:56.720772 | orchestrator | Monday 02 March 2026 00:43:50 +0000 (0:00:00.193) 0:00:08.755 **********
2026-03-02 00:43:56.720778 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:56.720784 | orchestrator |
2026-03-02 00:43:56.720790 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-02 00:43:56.720796 | orchestrator | Monday 02 March 2026 00:43:51 +0000 (0:00:00.182) 0:00:08.938 **********
2026-03-02 00:43:56.720802 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:56.720829 | orchestrator |
2026-03-02 00:43:56.720835 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-02 00:43:56.720841 | orchestrator | Monday 02 March 2026 00:43:51 +0000 (0:00:00.123) 0:00:09.062 **********
2026-03-02 00:43:56.720849 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '271875e3-8908-5e0e-b413-64afee9519da'}})
2026-03-02 00:43:56.720856 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '52125f52-6af3-5290-9fed-9584660c39a2'}})
2026-03-02 00:43:56.720862 | orchestrator |
2026-03-02 00:43:56.720881 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-02 00:43:56.720888 | orchestrator | Monday 02 March 2026 00:43:51 +0000 (0:00:00.178) 0:00:09.240 **********
2026-03-02 00:43:56.720896 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'})
2026-03-02 00:43:56.720903 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'})
2026-03-02 00:43:56.720909 | orchestrator |
2026-03-02 00:43:56.720915 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-02 00:43:56.720922 | orchestrator | Monday 02 March 2026 00:43:53 +0000 (0:00:02.023) 0:00:11.264 **********
2026-03-02 00:43:56.720928 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'})
2026-03-02 00:43:56.720935 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'})
2026-03-02 00:43:56.720942 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:56.720947 | orchestrator |
2026-03-02 00:43:56.720954 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-02 00:43:56.720961 | orchestrator | Monday 02 March 2026 00:43:53 +0000 (0:00:00.155) 0:00:11.419 **********
2026-03-02 00:43:56.720967 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'})
2026-03-02 00:43:56.720973 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'})
2026-03-02 00:43:56.720980 | orchestrator |
2026-03-02 00:43:56.720986 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-02 00:43:56.720991 | orchestrator | Monday 02 March 2026 00:43:55 +0000 (0:00:01.406) 0:00:12.825 **********
2026-03-02 00:43:56.720997 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'})
2026-03-02 00:43:56.721004 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'})
2026-03-02 00:43:56.721010 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:56.721017 | orchestrator |
2026-03-02 00:43:56.721023 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-02 00:43:56.721029 | orchestrator | Monday 02 March 2026 00:43:55 +0000 (0:00:00.135) 0:00:12.960 **********
2026-03-02 00:43:56.721049 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:56.721056 | orchestrator |
2026-03-02 00:43:56.721062 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-02 00:43:56.721068 | orchestrator | Monday 02 March 2026 00:43:55 +0000 (0:00:00.126) 0:00:13.087 **********
2026-03-02 00:43:56.721074 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'})
2026-03-02 00:43:56.721081 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'})
2026-03-02 00:43:56.721094 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:56.721100 | orchestrator |
2026-03-02 00:43:56.721107 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-02 00:43:56.721113 | orchestrator | Monday 02 March 2026 00:43:55 +0000 (0:00:00.280) 0:00:13.368 **********
2026-03-02 00:43:56.721119 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:56.721126 | orchestrator |
2026-03-02 00:43:56.721133 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-02 00:43:56.721140 | orchestrator | Monday 02 March 2026 00:43:55 +0000 (0:00:00.129) 0:00:13.497 **********
2026-03-02 00:43:56.721147 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'})
2026-03-02 00:43:56.721154 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'})
2026-03-02 00:43:56.721161 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:56.721167 | orchestrator |
2026-03-02 00:43:56.721173 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-02 00:43:56.721180 | orchestrator | Monday 02 March 2026 00:43:55 +0000 (0:00:00.137) 0:00:13.634 **********
2026-03-02 00:43:56.721186 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:56.721192 | orchestrator |
2026-03-02 00:43:56.721199 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-02 00:43:56.721205 | orchestrator | Monday 02 March 2026 00:43:55 +0000 (0:00:00.123) 0:00:13.758 **********
2026-03-02 00:43:56.721211 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'})
2026-03-02 00:43:56.721218 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'})
2026-03-02 00:43:56.721224 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:56.721230 | orchestrator |
2026-03-02 00:43:56.721237 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-02 00:43:56.721269 | orchestrator | Monday 02 March 2026 00:43:56 +0000 (0:00:00.135) 0:00:13.893 **********
2026-03-02 00:43:56.721276 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:43:56.721282 | orchestrator |
2026-03-02 00:43:56.721288 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-02 00:43:56.721295 | orchestrator | Monday 02 March 2026 00:43:56 +0000 (0:00:00.118) 0:00:14.012 **********
2026-03-02 00:43:56.721302 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'})
2026-03-02 00:43:56.721308 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'})
2026-03-02 00:43:56.721315 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:56.721321 | orchestrator |
2026-03-02 00:43:56.721328 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-02 00:43:56.721334 | orchestrator | Monday 02 March 2026 00:43:56 +0000 (0:00:00.125) 0:00:14.137 **********
2026-03-02 00:43:56.721340 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'})
2026-03-02 00:43:56.721347 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'})
2026-03-02 00:43:56.721353 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:56.721360 | orchestrator |
2026-03-02 00:43:56.721366 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-02 00:43:56.721377 | orchestrator | Monday 02 March 2026 00:43:56 +0000 (0:00:00.132) 0:00:14.270 **********
2026-03-02 00:43:56.721384 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'})
2026-03-02 00:43:56.721390 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'})
2026-03-02 00:43:56.721397 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:56.721403 | orchestrator |
2026-03-02 00:43:56.721409 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-02 00:43:56.721416 | orchestrator | Monday 02 March 2026 00:43:56 +0000 (0:00:00.130) 0:00:14.400 **********
2026-03-02 00:43:56.721422 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:43:56.721429 | orchestrator |
2026-03-02 00:43:56.721435 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-02 00:43:56.721449 | orchestrator | Monday 02 March 2026 00:43:56 +0000 (0:00:00.109) 0:00:14.510 **********
2026-03-02 00:44:02.759017 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:44:02.759114 | orchestrator |
2026-03-02 00:44:02.759127 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-02 00:44:02.759137 | orchestrator | Monday 02 March 2026 00:43:56 +0000 (0:00:00.112) 0:00:14.622 **********
2026-03-02 00:44:02.759145 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:44:02.759153 | orchestrator |
2026-03-02 00:44:02.759165 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-02 00:44:02.759191 | orchestrator | Monday 02 March 2026 00:43:56 +0000 (0:00:00.117) 0:00:14.740 **********
2026-03-02 00:44:02.759206 | orchestrator | ok: [testbed-node-3] => {
2026-03-02 00:44:02.759219 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-03-02 00:44:02.759232 | orchestrator | }
2026-03-02 00:44:02.759244 | orchestrator |
2026-03-02 00:44:02.759350 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-02 00:44:02.759364 | orchestrator | Monday 02 March 2026 00:43:57 +0000 (0:00:00.272) 0:00:15.013 **********
2026-03-02 00:44:02.759377 | orchestrator | ok: [testbed-node-3] => {
2026-03-02 00:44:02.759392 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-03-02 00:44:02.759405 | orchestrator | }
2026-03-02 00:44:02.759418 | orchestrator |
2026-03-02 00:44:02.759430 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-02 00:44:02.759443 | orchestrator | Monday 02 March 2026 00:43:57 +0000 (0:00:00.142) 0:00:15.155 **********
2026-03-02 00:44:02.759455 | orchestrator | ok: [testbed-node-3] => {
2026-03-02 00:44:02.759468 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-03-02 00:44:02.759481 | orchestrator | }
2026-03-02 00:44:02.759495 | orchestrator |
2026-03-02 00:44:02.759507 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-02 00:44:02.759521 | orchestrator | Monday 02 March 2026 00:43:57 +0000 (0:00:00.139) 0:00:15.294 **********
2026-03-02 00:44:02.759536 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:44:02.759549 | orchestrator |
2026-03-02 00:44:02.759564 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-02 00:44:02.759574 | orchestrator | Monday 02 March 2026 00:43:58 +0000 (0:00:00.661) 0:00:15.956 **********
2026-03-02 00:44:02.759584 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:44:02.759593 | orchestrator |
2026-03-02 00:44:02.759602 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-02 00:44:02.759612 | orchestrator | Monday 02 March 2026 00:43:58 +0000 (0:00:00.527) 0:00:16.483 **********
2026-03-02 00:44:02.759621 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:44:02.759629 | orchestrator |
2026-03-02 00:44:02.759638 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-02 00:44:02.759648 | orchestrator | Monday 02 March 2026 00:43:59 +0000 (0:00:00.535) 0:00:17.019 **********
2026-03-02 00:44:02.759657 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:44:02.759666 | orchestrator |
2026-03-02 00:44:02.759699 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-02 00:44:02.759709 | orchestrator | Monday 02 March 2026 00:43:59 +0000 (0:00:00.135) 0:00:17.154 **********
2026-03-02 00:44:02.759718 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:44:02.759728 | orchestrator |
2026-03-02 00:44:02.759737 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-02 00:44:02.759748 | orchestrator | Monday 02 March 2026 00:43:59 +0000 (0:00:00.099) 0:00:17.254 **********
2026-03-02 00:44:02.759760 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:44:02.759771 | orchestrator |
2026-03-02 00:44:02.759782 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-02 00:44:02.759793 | orchestrator | Monday 02 March 2026 00:43:59 +0000 (0:00:00.109) 0:00:17.364 **********
2026-03-02 00:44:02.759805 | orchestrator | ok: [testbed-node-3] => {
2026-03-02 00:44:02.759816 | orchestrator |  "vgs_report": {
2026-03-02 00:44:02.759827 | orchestrator |  "vg": []
2026-03-02 00:44:02.759838 | orchestrator |  }
2026-03-02 00:44:02.759850 | orchestrator | }
2026-03-02 00:44:02.759861 | orchestrator |
2026-03-02 00:44:02.759871 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-02 00:44:02.759883 | orchestrator | Monday 02 March 2026 00:43:59 +0000 (0:00:00.129) 0:00:17.493 **********
2026-03-02 00:44:02.759985 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:44:02.759997 | orchestrator |
2026-03-02 00:44:02.760008 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-02 00:44:02.760025 | orchestrator | Monday 02 March 2026 00:43:59 +0000 (0:00:00.113) 0:00:17.606 **********
2026-03-02 00:44:02.760039 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:44:02.760065 | orchestrator |
2026-03-02 00:44:02.760082 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-02 00:44:02.760098 | orchestrator | Monday 02 March 2026 00:43:59 +0000 (0:00:00.134) 0:00:17.741 **********
2026-03-02 00:44:02.760113 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:44:02.760128 | orchestrator |
2026-03-02 00:44:02.760144 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-02 00:44:02.760160 | orchestrator | Monday 02 March 2026 00:44:00 +0000 (0:00:00.270) 0:00:18.012 **********
2026-03-02 00:44:02.760173 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:44:02.760190 | orchestrator |
2026-03-02 00:44:02.760206 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-02 00:44:02.760222 | orchestrator | Monday
02 March 2026 00:44:00 +0000 (0:00:00.131) 0:00:18.143 ********** 2026-03-02 00:44:02.760238 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:44:02.760326 | orchestrator | 2026-03-02 00:44:02.760338 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-02 00:44:02.760347 | orchestrator | Monday 02 March 2026 00:44:00 +0000 (0:00:00.141) 0:00:18.285 ********** 2026-03-02 00:44:02.760356 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:44:02.760366 | orchestrator | 2026-03-02 00:44:02.760375 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-02 00:44:02.760385 | orchestrator | Monday 02 March 2026 00:44:00 +0000 (0:00:00.134) 0:00:18.420 ********** 2026-03-02 00:44:02.760394 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:44:02.760404 | orchestrator | 2026-03-02 00:44:02.760413 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-02 00:44:02.760423 | orchestrator | Monday 02 March 2026 00:44:00 +0000 (0:00:00.113) 0:00:18.533 ********** 2026-03-02 00:44:02.760454 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:44:02.760464 | orchestrator | 2026-03-02 00:44:02.760473 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-02 00:44:02.760483 | orchestrator | Monday 02 March 2026 00:44:00 +0000 (0:00:00.146) 0:00:18.680 ********** 2026-03-02 00:44:02.760492 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:44:02.760502 | orchestrator | 2026-03-02 00:44:02.760514 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-02 00:44:02.760542 | orchestrator | Monday 02 March 2026 00:44:01 +0000 (0:00:00.171) 0:00:18.851 ********** 2026-03-02 00:44:02.760552 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:44:02.760562 | orchestrator | 2026-03-02 00:44:02.760571 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-02 00:44:02.760581 | orchestrator | Monday 02 March 2026 00:44:01 +0000 (0:00:00.153) 0:00:19.005 ********** 2026-03-02 00:44:02.760590 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:44:02.760600 | orchestrator | 2026-03-02 00:44:02.760609 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-02 00:44:02.760619 | orchestrator | Monday 02 March 2026 00:44:01 +0000 (0:00:00.142) 0:00:19.147 ********** 2026-03-02 00:44:02.760628 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:44:02.760638 | orchestrator | 2026-03-02 00:44:02.760653 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-02 00:44:02.760665 | orchestrator | Monday 02 March 2026 00:44:01 +0000 (0:00:00.146) 0:00:19.294 ********** 2026-03-02 00:44:02.760674 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:44:02.760684 | orchestrator | 2026-03-02 00:44:02.760693 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-02 00:44:02.760703 | orchestrator | Monday 02 March 2026 00:44:01 +0000 (0:00:00.124) 0:00:19.418 ********** 2026-03-02 00:44:02.760712 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:44:02.760721 | orchestrator | 2026-03-02 00:44:02.760731 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-02 00:44:02.760740 | orchestrator | Monday 02 March 2026 00:44:01 +0000 (0:00:00.128) 0:00:19.547 ********** 2026-03-02 00:44:02.760751 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'})  2026-03-02 00:44:02.760762 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 
'ceph-52125f52-6af3-5290-9fed-9584660c39a2'})  2026-03-02 00:44:02.760772 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:44:02.760782 | orchestrator | 2026-03-02 00:44:02.760791 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-02 00:44:02.760807 | orchestrator | Monday 02 March 2026 00:44:02 +0000 (0:00:00.312) 0:00:19.859 ********** 2026-03-02 00:44:02.760817 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'})  2026-03-02 00:44:02.760827 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'})  2026-03-02 00:44:02.760837 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:44:02.760848 | orchestrator | 2026-03-02 00:44:02.760865 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-02 00:44:02.760881 | orchestrator | Monday 02 March 2026 00:44:02 +0000 (0:00:00.155) 0:00:20.014 ********** 2026-03-02 00:44:02.760896 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'})  2026-03-02 00:44:02.760913 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'})  2026-03-02 00:44:02.760929 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:44:02.760944 | orchestrator | 2026-03-02 00:44:02.760959 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-02 00:44:02.760975 | orchestrator | Monday 02 March 2026 00:44:02 +0000 (0:00:00.152) 0:00:20.166 ********** 2026-03-02 00:44:02.760991 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'})  2026-03-02 00:44:02.761008 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'})  2026-03-02 00:44:02.761037 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:44:02.761053 | orchestrator | 2026-03-02 00:44:02.761070 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-02 00:44:02.761080 | orchestrator | Monday 02 March 2026 00:44:02 +0000 (0:00:00.153) 0:00:20.319 ********** 2026-03-02 00:44:02.761089 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'})  2026-03-02 00:44:02.761099 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'})  2026-03-02 00:44:02.761108 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:44:02.761118 | orchestrator | 2026-03-02 00:44:02.761127 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-02 00:44:02.761136 | orchestrator | Monday 02 March 2026 00:44:02 +0000 (0:00:00.166) 0:00:20.486 ********** 2026-03-02 00:44:02.761155 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'})  2026-03-02 00:44:08.546363 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'})  2026-03-02 00:44:08.546456 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:44:08.546468 | orchestrator | 2026-03-02 00:44:08.546477 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-02 00:44:08.546487 | orchestrator | Monday 02 March 2026 00:44:02 +0000 (0:00:00.143) 0:00:20.629 ********** 2026-03-02 00:44:08.546495 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'})  2026-03-02 00:44:08.546504 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'})  2026-03-02 00:44:08.546512 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:44:08.546520 | orchestrator | 2026-03-02 00:44:08.546528 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-02 00:44:08.546536 | orchestrator | Monday 02 March 2026 00:44:03 +0000 (0:00:00.166) 0:00:20.796 ********** 2026-03-02 00:44:08.546544 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'})  2026-03-02 00:44:08.546551 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'})  2026-03-02 00:44:08.546559 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:44:08.546567 | orchestrator | 2026-03-02 00:44:08.546575 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-02 00:44:08.546583 | orchestrator | Monday 02 March 2026 00:44:03 +0000 (0:00:00.154) 0:00:20.950 ********** 2026-03-02 00:44:08.546591 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:44:08.546599 | orchestrator | 2026-03-02 00:44:08.546607 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-02 00:44:08.546615 | orchestrator | Monday 02 March 2026 00:44:03 +0000 
(0:00:00.589) 0:00:21.539 ********** 2026-03-02 00:44:08.546622 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:44:08.546630 | orchestrator | 2026-03-02 00:44:08.546638 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-02 00:44:08.546646 | orchestrator | Monday 02 March 2026 00:44:04 +0000 (0:00:00.558) 0:00:22.098 ********** 2026-03-02 00:44:08.546654 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:44:08.546662 | orchestrator | 2026-03-02 00:44:08.546670 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-02 00:44:08.546678 | orchestrator | Monday 02 March 2026 00:44:04 +0000 (0:00:00.148) 0:00:22.247 ********** 2026-03-02 00:44:08.546705 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'vg_name': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'}) 2026-03-02 00:44:08.546714 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'vg_name': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'}) 2026-03-02 00:44:08.546722 | orchestrator | 2026-03-02 00:44:08.546730 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-02 00:44:08.546738 | orchestrator | Monday 02 March 2026 00:44:04 +0000 (0:00:00.188) 0:00:22.435 ********** 2026-03-02 00:44:08.546759 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'})  2026-03-02 00:44:08.546768 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'})  2026-03-02 00:44:08.546776 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:44:08.546784 | orchestrator | 2026-03-02 00:44:08.546792 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-02 00:44:08.546800 | orchestrator | Monday 02 March 2026 00:44:04 +0000 (0:00:00.355) 0:00:22.791 ********** 2026-03-02 00:44:08.546808 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'})  2026-03-02 00:44:08.546816 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'})  2026-03-02 00:44:08.546824 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:44:08.546832 | orchestrator | 2026-03-02 00:44:08.546839 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-02 00:44:08.546847 | orchestrator | Monday 02 March 2026 00:44:05 +0000 (0:00:00.170) 0:00:22.961 ********** 2026-03-02 00:44:08.546855 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'})  2026-03-02 00:44:08.546864 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'})  2026-03-02 00:44:08.546874 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:44:08.546883 | orchestrator | 2026-03-02 00:44:08.546893 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-02 00:44:08.546902 | orchestrator | Monday 02 March 2026 00:44:05 +0000 (0:00:00.172) 0:00:23.134 ********** 2026-03-02 00:44:08.546926 | orchestrator | ok: [testbed-node-3] => { 2026-03-02 00:44:08.546937 | orchestrator |  "lvm_report": { 2026-03-02 00:44:08.546947 | orchestrator |  "lv": [ 2026-03-02 00:44:08.546957 | orchestrator |  { 2026-03-02 00:44:08.546966 | orchestrator |  "lv_name": 
"osd-block-271875e3-8908-5e0e-b413-64afee9519da", 2026-03-02 00:44:08.546976 | orchestrator |  "vg_name": "ceph-271875e3-8908-5e0e-b413-64afee9519da" 2026-03-02 00:44:08.546985 | orchestrator |  }, 2026-03-02 00:44:08.546995 | orchestrator |  { 2026-03-02 00:44:08.547004 | orchestrator |  "lv_name": "osd-block-52125f52-6af3-5290-9fed-9584660c39a2", 2026-03-02 00:44:08.547013 | orchestrator |  "vg_name": "ceph-52125f52-6af3-5290-9fed-9584660c39a2" 2026-03-02 00:44:08.547022 | orchestrator |  } 2026-03-02 00:44:08.547031 | orchestrator |  ], 2026-03-02 00:44:08.547041 | orchestrator |  "pv": [ 2026-03-02 00:44:08.547049 | orchestrator |  { 2026-03-02 00:44:08.547058 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-02 00:44:08.547068 | orchestrator |  "vg_name": "ceph-271875e3-8908-5e0e-b413-64afee9519da" 2026-03-02 00:44:08.547077 | orchestrator |  }, 2026-03-02 00:44:08.547086 | orchestrator |  { 2026-03-02 00:44:08.547102 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-02 00:44:08.547112 | orchestrator |  "vg_name": "ceph-52125f52-6af3-5290-9fed-9584660c39a2" 2026-03-02 00:44:08.547121 | orchestrator |  } 2026-03-02 00:44:08.547130 | orchestrator |  ] 2026-03-02 00:44:08.547139 | orchestrator |  } 2026-03-02 00:44:08.547148 | orchestrator | } 2026-03-02 00:44:08.547157 | orchestrator | 2026-03-02 00:44:08.547167 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-02 00:44:08.547176 | orchestrator | 2026-03-02 00:44:08.547186 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-02 00:44:08.547196 | orchestrator | Monday 02 March 2026 00:44:05 +0000 (0:00:00.269) 0:00:23.403 ********** 2026-03-02 00:44:08.547205 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-02 00:44:08.547215 | orchestrator | 2026-03-02 00:44:08.547224 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-02 
00:44:08.547232 | orchestrator | Monday 02 March 2026 00:44:05 +0000 (0:00:00.291) 0:00:23.694 ********** 2026-03-02 00:44:08.547240 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:44:08.547248 | orchestrator | 2026-03-02 00:44:08.547305 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:44:08.547314 | orchestrator | Monday 02 March 2026 00:44:06 +0000 (0:00:00.292) 0:00:23.987 ********** 2026-03-02 00:44:08.547327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-02 00:44:08.547335 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-02 00:44:08.547343 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-02 00:44:08.547351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-02 00:44:08.547359 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-02 00:44:08.547367 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-02 00:44:08.547374 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-02 00:44:08.547382 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-02 00:44:08.547390 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-02 00:44:08.547398 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-02 00:44:08.547406 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-02 00:44:08.547413 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-02 00:44:08.547421 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-02 00:44:08.547429 | orchestrator | 2026-03-02 00:44:08.547437 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:44:08.547445 | orchestrator | Monday 02 March 2026 00:44:06 +0000 (0:00:00.523) 0:00:24.510 ********** 2026-03-02 00:44:08.547453 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:08.547460 | orchestrator | 2026-03-02 00:44:08.547468 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:44:08.547476 | orchestrator | Monday 02 March 2026 00:44:06 +0000 (0:00:00.250) 0:00:24.761 ********** 2026-03-02 00:44:08.547484 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:08.547492 | orchestrator | 2026-03-02 00:44:08.547500 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:44:08.547508 | orchestrator | Monday 02 March 2026 00:44:07 +0000 (0:00:00.214) 0:00:24.975 ********** 2026-03-02 00:44:08.547515 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:08.547523 | orchestrator | 2026-03-02 00:44:08.547531 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:44:08.547547 | orchestrator | Monday 02 March 2026 00:44:07 +0000 (0:00:00.678) 0:00:25.653 ********** 2026-03-02 00:44:08.547555 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:08.547563 | orchestrator | 2026-03-02 00:44:08.547570 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:44:08.547578 | orchestrator | Monday 02 March 2026 00:44:08 +0000 (0:00:00.208) 0:00:25.862 ********** 2026-03-02 00:44:08.547586 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:08.547594 | orchestrator | 2026-03-02 00:44:08.547602 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-02 00:44:08.547610 | orchestrator | Monday 02 March 2026 00:44:08 +0000 (0:00:00.244) 0:00:26.108 ********** 2026-03-02 00:44:08.547618 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:08.547626 | orchestrator | 2026-03-02 00:44:08.547639 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:44:19.760758 | orchestrator | Monday 02 March 2026 00:44:08 +0000 (0:00:00.228) 0:00:26.336 ********** 2026-03-02 00:44:19.760856 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:19.760871 | orchestrator | 2026-03-02 00:44:19.760882 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:44:19.760891 | orchestrator | Monday 02 March 2026 00:44:08 +0000 (0:00:00.219) 0:00:26.556 ********** 2026-03-02 00:44:19.760900 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:19.760909 | orchestrator | 2026-03-02 00:44:19.760918 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:44:19.760927 | orchestrator | Monday 02 March 2026 00:44:08 +0000 (0:00:00.225) 0:00:26.781 ********** 2026-03-02 00:44:19.760936 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba) 2026-03-02 00:44:19.760946 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba) 2026-03-02 00:44:19.760954 | orchestrator | 2026-03-02 00:44:19.760963 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:44:19.760972 | orchestrator | Monday 02 March 2026 00:44:09 +0000 (0:00:00.428) 0:00:27.209 ********** 2026-03-02 00:44:19.760980 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5b76853c-a11b-45e9-97a5-74de733f1116) 2026-03-02 00:44:19.760989 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5b76853c-a11b-45e9-97a5-74de733f1116) 2026-03-02 00:44:19.760998 | orchestrator | 2026-03-02 00:44:19.761006 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:44:19.761015 | orchestrator | Monday 02 March 2026 00:44:09 +0000 (0:00:00.430) 0:00:27.640 ********** 2026-03-02 00:44:19.761024 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_34a77e0f-df07-4c87-b046-7d039bca2077) 2026-03-02 00:44:19.761032 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_34a77e0f-df07-4c87-b046-7d039bca2077) 2026-03-02 00:44:19.761041 | orchestrator | 2026-03-02 00:44:19.761049 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:44:19.761058 | orchestrator | Monday 02 March 2026 00:44:10 +0000 (0:00:00.414) 0:00:28.055 ********** 2026-03-02 00:44:19.761080 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b2184af5-6da0-496d-b48a-b0daa217c842) 2026-03-02 00:44:19.761089 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b2184af5-6da0-496d-b48a-b0daa217c842) 2026-03-02 00:44:19.761097 | orchestrator | 2026-03-02 00:44:19.761106 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-02 00:44:19.761115 | orchestrator | Monday 02 March 2026 00:44:10 +0000 (0:00:00.638) 0:00:28.693 ********** 2026-03-02 00:44:19.761124 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-02 00:44:19.761132 | orchestrator | 2026-03-02 00:44:19.761141 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:44:19.761149 | orchestrator | Monday 02 March 2026 00:44:11 +0000 (0:00:00.583) 0:00:29.276 ********** 2026-03-02 00:44:19.761178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-03-02 00:44:19.761188 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-02 00:44:19.761196 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-02 00:44:19.761205 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-02 00:44:19.761213 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-02 00:44:19.761222 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-02 00:44:19.761230 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-02 00:44:19.761239 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-02 00:44:19.761247 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-02 00:44:19.761256 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-02 00:44:19.761291 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-02 00:44:19.761301 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-02 00:44:19.761311 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-02 00:44:19.761322 | orchestrator | 2026-03-02 00:44:19.761332 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:44:19.761342 | orchestrator | Monday 02 March 2026 00:44:12 +0000 (0:00:00.833) 0:00:30.110 ********** 2026-03-02 00:44:19.761352 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:19.761363 | orchestrator | 2026-03-02 
00:44:19.761373 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:44:19.761383 | orchestrator | Monday 02 March 2026 00:44:12 +0000 (0:00:00.187) 0:00:30.297 ********** 2026-03-02 00:44:19.761394 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:19.761402 | orchestrator | 2026-03-02 00:44:19.761411 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:44:19.761420 | orchestrator | Monday 02 March 2026 00:44:12 +0000 (0:00:00.220) 0:00:30.518 ********** 2026-03-02 00:44:19.761428 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:19.761437 | orchestrator | 2026-03-02 00:44:19.761460 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:44:19.761469 | orchestrator | Monday 02 March 2026 00:44:12 +0000 (0:00:00.211) 0:00:30.729 ********** 2026-03-02 00:44:19.761478 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:19.761486 | orchestrator | 2026-03-02 00:44:19.761495 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:44:19.761503 | orchestrator | Monday 02 March 2026 00:44:13 +0000 (0:00:00.203) 0:00:30.933 ********** 2026-03-02 00:44:19.761512 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:19.761520 | orchestrator | 2026-03-02 00:44:19.761529 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:44:19.761537 | orchestrator | Monday 02 March 2026 00:44:13 +0000 (0:00:00.185) 0:00:31.118 ********** 2026-03-02 00:44:19.761546 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:19.761554 | orchestrator | 2026-03-02 00:44:19.761563 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:44:19.761571 | orchestrator | Monday 02 March 2026 00:44:13 +0000 (0:00:00.204) 
0:00:31.323 ********** 2026-03-02 00:44:19.761579 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:19.761588 | orchestrator | 2026-03-02 00:44:19.761596 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:44:19.761605 | orchestrator | Monday 02 March 2026 00:44:13 +0000 (0:00:00.209) 0:00:31.533 ********** 2026-03-02 00:44:19.761620 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:19.761629 | orchestrator | 2026-03-02 00:44:19.761638 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:44:19.761646 | orchestrator | Monday 02 March 2026 00:44:13 +0000 (0:00:00.230) 0:00:31.763 ********** 2026-03-02 00:44:19.761655 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-02 00:44:19.761663 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-02 00:44:19.761672 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-02 00:44:19.761681 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-02 00:44:19.761689 | orchestrator | 2026-03-02 00:44:19.761698 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:44:19.761706 | orchestrator | Monday 02 March 2026 00:44:14 +0000 (0:00:00.852) 0:00:32.616 ********** 2026-03-02 00:44:19.761715 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:19.761723 | orchestrator | 2026-03-02 00:44:19.761731 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:44:19.761740 | orchestrator | Monday 02 March 2026 00:44:15 +0000 (0:00:00.196) 0:00:32.812 ********** 2026-03-02 00:44:19.761749 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:19.761757 | orchestrator | 2026-03-02 00:44:19.761766 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:44:19.761781 | orchestrator | Monday 02 
March 2026 00:44:15 +0000 (0:00:00.778) 0:00:33.590 ********** 2026-03-02 00:44:19.761790 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:19.761799 | orchestrator | 2026-03-02 00:44:19.761807 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-02 00:44:19.761816 | orchestrator | Monday 02 March 2026 00:44:16 +0000 (0:00:00.206) 0:00:33.797 ********** 2026-03-02 00:44:19.761825 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:19.761833 | orchestrator | 2026-03-02 00:44:19.761842 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-02 00:44:19.761850 | orchestrator | Monday 02 March 2026 00:44:16 +0000 (0:00:00.203) 0:00:34.001 ********** 2026-03-02 00:44:19.761859 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:19.761867 | orchestrator | 2026-03-02 00:44:19.761876 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-02 00:44:19.761884 | orchestrator | Monday 02 March 2026 00:44:16 +0000 (0:00:00.140) 0:00:34.141 ********** 2026-03-02 00:44:19.761893 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de3a51bd-019b-527a-8dea-ff4c94e5d801'}}) 2026-03-02 00:44:19.761902 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8a84d633-ba5b-5049-b6da-2482ee8b3083'}}) 2026-03-02 00:44:19.761911 | orchestrator | 2026-03-02 00:44:19.761919 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-02 00:44:19.761928 | orchestrator | Monday 02 March 2026 00:44:16 +0000 (0:00:00.186) 0:00:34.327 ********** 2026-03-02 00:44:19.761937 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'}) 2026-03-02 00:44:19.761947 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'}) 2026-03-02 00:44:19.761956 | orchestrator | 2026-03-02 00:44:19.761964 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-02 00:44:19.761973 | orchestrator | Monday 02 March 2026 00:44:18 +0000 (0:00:01.818) 0:00:36.146 ********** 2026-03-02 00:44:19.761981 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'})  2026-03-02 00:44:19.761991 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'})  2026-03-02 00:44:19.762005 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:19.762081 | orchestrator | 2026-03-02 00:44:19.762094 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-02 00:44:19.762103 | orchestrator | Monday 02 March 2026 00:44:18 +0000 (0:00:00.135) 0:00:36.282 ********** 2026-03-02 00:44:19.762112 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'}) 2026-03-02 00:44:19.762127 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'}) 2026-03-02 00:44:25.070706 | orchestrator | 2026-03-02 00:44:25.070843 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-02 00:44:25.070860 | orchestrator | Monday 02 March 2026 00:44:19 +0000 (0:00:01.336) 0:00:37.618 ********** 2026-03-02 00:44:25.070873 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 
'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'})  2026-03-02 00:44:25.070887 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'})  2026-03-02 00:44:25.070899 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:25.070911 | orchestrator | 2026-03-02 00:44:25.070923 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-02 00:44:25.070934 | orchestrator | Monday 02 March 2026 00:44:19 +0000 (0:00:00.130) 0:00:37.749 ********** 2026-03-02 00:44:25.070944 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:25.070955 | orchestrator | 2026-03-02 00:44:25.070966 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-02 00:44:25.070976 | orchestrator | Monday 02 March 2026 00:44:20 +0000 (0:00:00.103) 0:00:37.853 ********** 2026-03-02 00:44:25.070996 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'})  2026-03-02 00:44:25.071015 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'})  2026-03-02 00:44:25.071034 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:25.071054 | orchestrator | 2026-03-02 00:44:25.071074 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-02 00:44:25.071095 | orchestrator | Monday 02 March 2026 00:44:20 +0000 (0:00:00.133) 0:00:37.986 ********** 2026-03-02 00:44:25.071115 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:25.071132 | orchestrator | 2026-03-02 00:44:25.071150 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-02 00:44:25.071194 | orchestrator | Monday 
02 March 2026 00:44:20 +0000 (0:00:00.136) 0:00:38.122 ********** 2026-03-02 00:44:25.071213 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'})  2026-03-02 00:44:25.071235 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'})  2026-03-02 00:44:25.071256 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:25.071309 | orchestrator | 2026-03-02 00:44:25.071331 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-02 00:44:25.071352 | orchestrator | Monday 02 March 2026 00:44:20 +0000 (0:00:00.256) 0:00:38.379 ********** 2026-03-02 00:44:25.071373 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:25.071387 | orchestrator | 2026-03-02 00:44:25.071401 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-02 00:44:25.071414 | orchestrator | Monday 02 March 2026 00:44:20 +0000 (0:00:00.135) 0:00:38.515 ********** 2026-03-02 00:44:25.071425 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'})  2026-03-02 00:44:25.071467 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'})  2026-03-02 00:44:25.071479 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:25.071490 | orchestrator | 2026-03-02 00:44:25.071501 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-02 00:44:25.071511 | orchestrator | Monday 02 March 2026 00:44:20 +0000 (0:00:00.130) 0:00:38.646 ********** 2026-03-02 00:44:25.071523 | orchestrator | ok: [testbed-node-4] 
2026-03-02 00:44:25.071535 | orchestrator | 2026-03-02 00:44:25.071546 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-02 00:44:25.071561 | orchestrator | Monday 02 March 2026 00:44:21 +0000 (0:00:00.163) 0:00:38.810 ********** 2026-03-02 00:44:25.071579 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'})  2026-03-02 00:44:25.071599 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'})  2026-03-02 00:44:25.071619 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:25.071638 | orchestrator | 2026-03-02 00:44:25.071656 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-02 00:44:25.071673 | orchestrator | Monday 02 March 2026 00:44:21 +0000 (0:00:00.159) 0:00:38.969 ********** 2026-03-02 00:44:25.071691 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'})  2026-03-02 00:44:25.071709 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'})  2026-03-02 00:44:25.071729 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:25.071747 | orchestrator | 2026-03-02 00:44:25.071763 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-02 00:44:25.071796 | orchestrator | Monday 02 March 2026 00:44:21 +0000 (0:00:00.161) 0:00:39.131 ********** 2026-03-02 00:44:25.071808 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'})  2026-03-02 
00:44:25.071819 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'})  2026-03-02 00:44:25.071830 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:25.071841 | orchestrator | 2026-03-02 00:44:25.071852 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-02 00:44:25.071863 | orchestrator | Monday 02 March 2026 00:44:21 +0000 (0:00:00.165) 0:00:39.296 ********** 2026-03-02 00:44:25.071873 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:25.071885 | orchestrator | 2026-03-02 00:44:25.071895 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-02 00:44:25.071906 | orchestrator | Monday 02 March 2026 00:44:21 +0000 (0:00:00.141) 0:00:39.439 ********** 2026-03-02 00:44:25.071917 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:25.071927 | orchestrator | 2026-03-02 00:44:25.071938 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-02 00:44:25.071949 | orchestrator | Monday 02 March 2026 00:44:21 +0000 (0:00:00.132) 0:00:39.571 ********** 2026-03-02 00:44:25.071960 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:25.071970 | orchestrator | 2026-03-02 00:44:25.071981 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-02 00:44:25.071992 | orchestrator | Monday 02 March 2026 00:44:21 +0000 (0:00:00.158) 0:00:39.729 ********** 2026-03-02 00:44:25.072004 | orchestrator | ok: [testbed-node-4] => { 2026-03-02 00:44:25.072023 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-02 00:44:25.072056 | orchestrator | } 2026-03-02 00:44:25.072076 | orchestrator | 2026-03-02 00:44:25.072095 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-02 
00:44:25.072114 | orchestrator | Monday 02 March 2026 00:44:22 +0000 (0:00:00.151) 0:00:39.880 ********** 2026-03-02 00:44:25.072132 | orchestrator | ok: [testbed-node-4] => { 2026-03-02 00:44:25.072155 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-02 00:44:25.072174 | orchestrator | } 2026-03-02 00:44:25.072185 | orchestrator | 2026-03-02 00:44:25.072203 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-02 00:44:25.072215 | orchestrator | Monday 02 March 2026 00:44:22 +0000 (0:00:00.136) 0:00:40.016 ********** 2026-03-02 00:44:25.072225 | orchestrator | ok: [testbed-node-4] => { 2026-03-02 00:44:25.072236 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-02 00:44:25.072247 | orchestrator | } 2026-03-02 00:44:25.072258 | orchestrator | 2026-03-02 00:44:25.072296 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-02 00:44:25.072307 | orchestrator | Monday 02 March 2026 00:44:22 +0000 (0:00:00.396) 0:00:40.413 ********** 2026-03-02 00:44:25.072318 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:44:25.072329 | orchestrator | 2026-03-02 00:44:25.072339 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-02 00:44:25.072350 | orchestrator | Monday 02 March 2026 00:44:23 +0000 (0:00:00.522) 0:00:40.936 ********** 2026-03-02 00:44:25.072361 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:44:25.072372 | orchestrator | 2026-03-02 00:44:25.072382 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-02 00:44:25.072393 | orchestrator | Monday 02 March 2026 00:44:23 +0000 (0:00:00.504) 0:00:41.440 ********** 2026-03-02 00:44:25.072404 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:44:25.072415 | orchestrator | 2026-03-02 00:44:25.072425 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-03-02 00:44:25.072436 | orchestrator | Monday 02 March 2026 00:44:24 +0000 (0:00:00.463) 0:00:41.903 ********** 2026-03-02 00:44:25.072447 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:44:25.072457 | orchestrator | 2026-03-02 00:44:25.072468 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-02 00:44:25.072479 | orchestrator | Monday 02 March 2026 00:44:24 +0000 (0:00:00.130) 0:00:42.034 ********** 2026-03-02 00:44:25.072489 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:25.072500 | orchestrator | 2026-03-02 00:44:25.072511 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-02 00:44:25.072521 | orchestrator | Monday 02 March 2026 00:44:24 +0000 (0:00:00.096) 0:00:42.130 ********** 2026-03-02 00:44:25.072532 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:25.072543 | orchestrator | 2026-03-02 00:44:25.072553 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-02 00:44:25.072564 | orchestrator | Monday 02 March 2026 00:44:24 +0000 (0:00:00.101) 0:00:42.232 ********** 2026-03-02 00:44:25.072579 | orchestrator | ok: [testbed-node-4] => { 2026-03-02 00:44:25.072597 | orchestrator |  "vgs_report": { 2026-03-02 00:44:25.072635 | orchestrator |  "vg": [] 2026-03-02 00:44:25.072669 | orchestrator |  } 2026-03-02 00:44:25.072688 | orchestrator | } 2026-03-02 00:44:25.072699 | orchestrator | 2026-03-02 00:44:25.072710 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-02 00:44:25.072721 | orchestrator | Monday 02 March 2026 00:44:24 +0000 (0:00:00.118) 0:00:42.351 ********** 2026-03-02 00:44:25.072732 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:25.072743 | orchestrator | 2026-03-02 00:44:25.072753 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-03-02 00:44:25.072764 | orchestrator | Monday 02 March 2026 00:44:24 +0000 (0:00:00.123) 0:00:42.474 ********** 2026-03-02 00:44:25.072775 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:25.072790 | orchestrator | 2026-03-02 00:44:25.072806 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-02 00:44:25.072837 | orchestrator | Monday 02 March 2026 00:44:24 +0000 (0:00:00.117) 0:00:42.592 ********** 2026-03-02 00:44:25.072856 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:25.072877 | orchestrator | 2026-03-02 00:44:25.072896 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-02 00:44:25.072914 | orchestrator | Monday 02 March 2026 00:44:24 +0000 (0:00:00.125) 0:00:42.717 ********** 2026-03-02 00:44:25.072934 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:25.072953 | orchestrator | 2026-03-02 00:44:25.072983 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-02 00:44:29.057448 | orchestrator | Monday 02 March 2026 00:44:25 +0000 (0:00:00.143) 0:00:42.861 ********** 2026-03-02 00:44:29.057532 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:29.057541 | orchestrator | 2026-03-02 00:44:29.057549 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-02 00:44:29.057555 | orchestrator | Monday 02 March 2026 00:44:25 +0000 (0:00:00.272) 0:00:43.133 ********** 2026-03-02 00:44:29.057561 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:29.057566 | orchestrator | 2026-03-02 00:44:29.057581 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-02 00:44:29.057587 | orchestrator | Monday 02 March 2026 00:44:25 +0000 (0:00:00.134) 0:00:43.268 ********** 2026-03-02 00:44:29.057592 | orchestrator | skipping: [testbed-node-4] 
2026-03-02 00:44:29.057598 | orchestrator | 2026-03-02 00:44:29.057603 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-02 00:44:29.057609 | orchestrator | Monday 02 March 2026 00:44:25 +0000 (0:00:00.120) 0:00:43.388 ********** 2026-03-02 00:44:29.057615 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:29.057620 | orchestrator | 2026-03-02 00:44:29.057626 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-02 00:44:29.057631 | orchestrator | Monday 02 March 2026 00:44:25 +0000 (0:00:00.128) 0:00:43.516 ********** 2026-03-02 00:44:29.057636 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:29.057642 | orchestrator | 2026-03-02 00:44:29.057647 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-02 00:44:29.057653 | orchestrator | Monday 02 March 2026 00:44:25 +0000 (0:00:00.127) 0:00:43.644 ********** 2026-03-02 00:44:29.057658 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:29.057664 | orchestrator | 2026-03-02 00:44:29.057669 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-02 00:44:29.057675 | orchestrator | Monday 02 March 2026 00:44:25 +0000 (0:00:00.108) 0:00:43.752 ********** 2026-03-02 00:44:29.057680 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:29.057685 | orchestrator | 2026-03-02 00:44:29.057691 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-02 00:44:29.057696 | orchestrator | Monday 02 March 2026 00:44:26 +0000 (0:00:00.103) 0:00:43.856 ********** 2026-03-02 00:44:29.057702 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:29.057707 | orchestrator | 2026-03-02 00:44:29.057713 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-02 00:44:29.057718 | orchestrator | 
Monday 02 March 2026 00:44:26 +0000 (0:00:00.106) 0:00:43.962 ********** 2026-03-02 00:44:29.057724 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:29.057730 | orchestrator | 2026-03-02 00:44:29.057735 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-02 00:44:29.057741 | orchestrator | Monday 02 March 2026 00:44:26 +0000 (0:00:00.110) 0:00:44.073 ********** 2026-03-02 00:44:29.057746 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:29.057752 | orchestrator | 2026-03-02 00:44:29.057757 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-02 00:44:29.057763 | orchestrator | Monday 02 March 2026 00:44:26 +0000 (0:00:00.122) 0:00:44.195 ********** 2026-03-02 00:44:29.057769 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'})  2026-03-02 00:44:29.057808 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'})  2026-03-02 00:44:29.057815 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:29.057820 | orchestrator | 2026-03-02 00:44:29.057826 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-02 00:44:29.057831 | orchestrator | Monday 02 March 2026 00:44:26 +0000 (0:00:00.137) 0:00:44.332 ********** 2026-03-02 00:44:29.057836 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'})  2026-03-02 00:44:29.057842 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'})  2026-03-02 00:44:29.057847 | orchestrator | skipping: 
[testbed-node-4] 2026-03-02 00:44:29.057853 | orchestrator | 2026-03-02 00:44:29.057858 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-02 00:44:29.057864 | orchestrator | Monday 02 March 2026 00:44:26 +0000 (0:00:00.138) 0:00:44.471 ********** 2026-03-02 00:44:29.057869 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'})  2026-03-02 00:44:29.057875 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'})  2026-03-02 00:44:29.057880 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:29.057885 | orchestrator | 2026-03-02 00:44:29.057891 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-02 00:44:29.057900 | orchestrator | Monday 02 March 2026 00:44:26 +0000 (0:00:00.277) 0:00:44.749 ********** 2026-03-02 00:44:29.057909 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'})  2026-03-02 00:44:29.057925 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'})  2026-03-02 00:44:29.057934 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:29.057942 | orchestrator | 2026-03-02 00:44:29.057966 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-02 00:44:29.057975 | orchestrator | Monday 02 March 2026 00:44:27 +0000 (0:00:00.117) 0:00:44.866 ********** 2026-03-02 00:44:29.057983 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 
'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'})  2026-03-02 00:44:29.057992 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'})  2026-03-02 00:44:29.058001 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:29.058010 | orchestrator | 2026-03-02 00:44:29.058078 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-02 00:44:29.058084 | orchestrator | Monday 02 March 2026 00:44:27 +0000 (0:00:00.129) 0:00:44.995 ********** 2026-03-02 00:44:29.058089 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'})  2026-03-02 00:44:29.058095 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'})  2026-03-02 00:44:29.058101 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:29.058106 | orchestrator | 2026-03-02 00:44:29.058111 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-02 00:44:29.058116 | orchestrator | Monday 02 March 2026 00:44:27 +0000 (0:00:00.131) 0:00:45.128 ********** 2026-03-02 00:44:29.058122 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'})  2026-03-02 00:44:29.058141 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'})  2026-03-02 00:44:29.058146 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:29.058152 | orchestrator | 2026-03-02 00:44:29.058157 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-02 
00:44:29.058162 | orchestrator | Monday 02 March 2026 00:44:27 +0000 (0:00:00.138) 0:00:45.266 ********** 2026-03-02 00:44:29.058168 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'})  2026-03-02 00:44:29.058173 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'})  2026-03-02 00:44:29.058178 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:29.058184 | orchestrator | 2026-03-02 00:44:29.058189 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-02 00:44:29.058194 | orchestrator | Monday 02 March 2026 00:44:27 +0000 (0:00:00.141) 0:00:45.408 ********** 2026-03-02 00:44:29.058200 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:44:29.058205 | orchestrator | 2026-03-02 00:44:29.058210 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-02 00:44:29.058216 | orchestrator | Monday 02 March 2026 00:44:28 +0000 (0:00:00.521) 0:00:45.930 ********** 2026-03-02 00:44:29.058221 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:44:29.058226 | orchestrator | 2026-03-02 00:44:29.058231 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-02 00:44:29.058237 | orchestrator | Monday 02 March 2026 00:44:28 +0000 (0:00:00.467) 0:00:46.397 ********** 2026-03-02 00:44:29.058242 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:44:29.058247 | orchestrator | 2026-03-02 00:44:29.058252 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-02 00:44:29.058258 | orchestrator | Monday 02 March 2026 00:44:28 +0000 (0:00:00.095) 0:00:46.493 ********** 2026-03-02 00:44:29.058263 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'vg_name': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'}) 2026-03-02 00:44:29.058310 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'vg_name': 'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'}) 2026-03-02 00:44:29.058316 | orchestrator | 2026-03-02 00:44:29.058321 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-02 00:44:29.058326 | orchestrator | Monday 02 March 2026 00:44:28 +0000 (0:00:00.145) 0:00:46.639 ********** 2026-03-02 00:44:29.058332 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'})  2026-03-02 00:44:29.058337 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'})  2026-03-02 00:44:29.058342 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:29.058347 | orchestrator | 2026-03-02 00:44:29.058353 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-02 00:44:29.058358 | orchestrator | Monday 02 March 2026 00:44:28 +0000 (0:00:00.141) 0:00:46.781 ********** 2026-03-02 00:44:29.058363 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'})  2026-03-02 00:44:29.058375 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'})  2026-03-02 00:44:34.266995 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:34.267094 | orchestrator | 2026-03-02 00:44:34.267134 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-02 00:44:34.267148 | 
orchestrator | Monday 02 March 2026 00:44:29 +0000 (0:00:00.144) 0:00:46.926 ********** 2026-03-02 00:44:34.267159 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'})  2026-03-02 00:44:34.267172 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'})  2026-03-02 00:44:34.267183 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:44:34.267194 | orchestrator | 2026-03-02 00:44:34.267205 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-02 00:44:34.267216 | orchestrator | Monday 02 March 2026 00:44:29 +0000 (0:00:00.150) 0:00:47.076 ********** 2026-03-02 00:44:34.267227 | orchestrator | ok: [testbed-node-4] => { 2026-03-02 00:44:34.267237 | orchestrator |  "lvm_report": { 2026-03-02 00:44:34.267249 | orchestrator |  "lv": [ 2026-03-02 00:44:34.267260 | orchestrator |  { 2026-03-02 00:44:34.267319 | orchestrator |  "lv_name": "osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083", 2026-03-02 00:44:34.267332 | orchestrator |  "vg_name": "ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083" 2026-03-02 00:44:34.267343 | orchestrator |  }, 2026-03-02 00:44:34.267353 | orchestrator |  { 2026-03-02 00:44:34.267364 | orchestrator |  "lv_name": "osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801", 2026-03-02 00:44:34.267375 | orchestrator |  "vg_name": "ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801" 2026-03-02 00:44:34.267386 | orchestrator |  } 2026-03-02 00:44:34.267396 | orchestrator |  ], 2026-03-02 00:44:34.267407 | orchestrator |  "pv": [ 2026-03-02 00:44:34.267417 | orchestrator |  { 2026-03-02 00:44:34.267428 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-02 00:44:34.267452 | orchestrator |  "vg_name": "ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801" 2026-03-02 00:44:34.267464 | orchestrator |  }, 2026-03-02 
00:44:34.267474 | orchestrator |  {
2026-03-02 00:44:34.267485 | orchestrator |  "pv_name": "/dev/sdc",
2026-03-02 00:44:34.267495 | orchestrator |  "vg_name": "ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083"
2026-03-02 00:44:34.267506 | orchestrator |  }
2026-03-02 00:44:34.267517 | orchestrator |  ]
2026-03-02 00:44:34.267529 | orchestrator |  }
2026-03-02 00:44:34.267542 | orchestrator | }
2026-03-02 00:44:34.267556 | orchestrator |
2026-03-02 00:44:34.267568 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-02 00:44:34.267581 | orchestrator |
2026-03-02 00:44:34.267593 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-02 00:44:34.267606 | orchestrator | Monday 02 March 2026 00:44:29 +0000 (0:00:00.379) 0:00:47.455 **********
2026-03-02 00:44:34.267619 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-02 00:44:34.267631 | orchestrator |
2026-03-02 00:44:34.267644 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-02 00:44:34.267657 | orchestrator | Monday 02 March 2026 00:44:29 +0000 (0:00:00.238) 0:00:47.693 **********
2026-03-02 00:44:34.267670 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:44:34.267683 | orchestrator |
2026-03-02 00:44:34.267695 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:44:34.267708 | orchestrator | Monday 02 March 2026 00:44:30 +0000 (0:00:00.202) 0:00:47.896 **********
2026-03-02 00:44:34.267720 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-02 00:44:34.267732 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-02 00:44:34.267745 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-02 00:44:34.267756 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-02 00:44:34.267779 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-02 00:44:34.267792 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-02 00:44:34.267804 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-02 00:44:34.267816 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-02 00:44:34.267829 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-02 00:44:34.267847 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-02 00:44:34.267859 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-02 00:44:34.267872 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-02 00:44:34.267884 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-02 00:44:34.267898 | orchestrator |
2026-03-02 00:44:34.267910 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:44:34.267921 | orchestrator | Monday 02 March 2026 00:44:30 +0000 (0:00:00.371) 0:00:48.267 **********
2026-03-02 00:44:34.267931 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:34.267942 | orchestrator |
2026-03-02 00:44:34.267953 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:44:34.267964 | orchestrator | Monday 02 March 2026 00:44:30 +0000 (0:00:00.162) 0:00:48.430 **********
2026-03-02 00:44:34.267974 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:34.267985 | orchestrator |
2026-03-02 00:44:34.267996 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:44:34.268023 | orchestrator | Monday 02 March 2026 00:44:30 +0000 (0:00:00.196) 0:00:48.627 **********
2026-03-02 00:44:34.268034 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:34.268045 | orchestrator |
2026-03-02 00:44:34.268056 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:44:34.268067 | orchestrator | Monday 02 March 2026 00:44:31 +0000 (0:00:00.174) 0:00:48.802 **********
2026-03-02 00:44:34.268077 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:34.268088 | orchestrator |
2026-03-02 00:44:34.268099 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:44:34.268110 | orchestrator | Monday 02 March 2026 00:44:31 +0000 (0:00:00.178) 0:00:48.980 **********
2026-03-02 00:44:34.268120 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:34.268131 | orchestrator |
2026-03-02 00:44:34.268142 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:44:34.268152 | orchestrator | Monday 02 March 2026 00:44:31 +0000 (0:00:00.433) 0:00:49.414 **********
2026-03-02 00:44:34.268163 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:34.268174 | orchestrator |
2026-03-02 00:44:34.268185 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:44:34.268196 | orchestrator | Monday 02 March 2026 00:44:31 +0000 (0:00:00.159) 0:00:49.574 **********
2026-03-02 00:44:34.268206 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:34.268217 | orchestrator |
2026-03-02 00:44:34.268228 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:44:34.268239 | orchestrator | Monday 02 March 2026 00:44:31 +0000 (0:00:00.168) 0:00:49.742 **********
2026-03-02 00:44:34.268250 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:34.268260 | orchestrator |
2026-03-02 00:44:34.268292 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:44:34.268304 | orchestrator | Monday 02 March 2026 00:44:32 +0000 (0:00:00.171) 0:00:49.914 **********
2026-03-02 00:44:34.268315 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8)
2026-03-02 00:44:34.268326 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8)
2026-03-02 00:44:34.268344 | orchestrator |
2026-03-02 00:44:34.268354 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:44:34.268365 | orchestrator | Monday 02 March 2026 00:44:32 +0000 (0:00:00.387) 0:00:50.301 **********
2026-03-02 00:44:34.268375 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a8122f95-b81e-4023-b303-8950dd4c9351)
2026-03-02 00:44:34.268386 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a8122f95-b81e-4023-b303-8950dd4c9351)
2026-03-02 00:44:34.268397 | orchestrator |
2026-03-02 00:44:34.268407 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:44:34.268418 | orchestrator | Monday 02 March 2026 00:44:32 +0000 (0:00:00.389) 0:00:50.690 **********
2026-03-02 00:44:34.268429 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ac18fc5b-7614-46f9-bf3c-282e02a3d506)
2026-03-02 00:44:34.268439 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ac18fc5b-7614-46f9-bf3c-282e02a3d506)
2026-03-02 00:44:34.268450 | orchestrator |
2026-03-02 00:44:34.268461 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:44:34.268471 | orchestrator | Monday 02 March 2026 00:44:33 +0000 (0:00:00.354) 0:00:51.045 **********
2026-03-02 00:44:34.268482 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3458d56d-fe8a-4fae-86e7-5458fccbe7bb)
2026-03-02 00:44:34.268493 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3458d56d-fe8a-4fae-86e7-5458fccbe7bb)
2026-03-02 00:44:34.268504 | orchestrator |
2026-03-02 00:44:34.268514 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-02 00:44:34.268525 | orchestrator | Monday 02 March 2026 00:44:33 +0000 (0:00:00.411) 0:00:51.457 **********
2026-03-02 00:44:34.268536 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-02 00:44:34.268546 | orchestrator |
2026-03-02 00:44:34.268557 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:44:34.268567 | orchestrator | Monday 02 March 2026 00:44:33 +0000 (0:00:00.295) 0:00:51.753 **********
2026-03-02 00:44:34.268578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-02 00:44:34.268588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-02 00:44:34.268599 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-02 00:44:34.268610 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-02 00:44:34.268621 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-02 00:44:34.268631 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-02 00:44:34.268642 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-02 00:44:34.268652 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-02 00:44:34.268663 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-02 00:44:34.268673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-02 00:44:34.268684 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-02 00:44:34.268701 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-02 00:44:43.464383 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-02 00:44:43.464463 | orchestrator |
2026-03-02 00:44:43.464471 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:44:43.464477 | orchestrator | Monday 02 March 2026 00:44:34 +0000 (0:00:00.382) 0:00:52.135 **********
2026-03-02 00:44:43.464498 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:43.464503 | orchestrator |
2026-03-02 00:44:43.464508 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:44:43.464512 | orchestrator | Monday 02 March 2026 00:44:34 +0000 (0:00:00.187) 0:00:52.323 **********
2026-03-02 00:44:43.464516 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:43.464520 | orchestrator |
2026-03-02 00:44:43.464557 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:44:43.464562 | orchestrator | Monday 02 March 2026 00:44:35 +0000 (0:00:00.671) 0:00:52.994 **********
2026-03-02 00:44:43.464566 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:43.464570 | orchestrator |
2026-03-02 00:44:43.464575 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:44:43.464579 | orchestrator | Monday 02 March 2026 00:44:35 +0000 (0:00:00.209) 0:00:53.204 **********
2026-03-02 00:44:43.464583 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:43.464587 | orchestrator |
2026-03-02 00:44:43.464591 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:44:43.464595 | orchestrator | Monday 02 March 2026 00:44:35 +0000 (0:00:00.205) 0:00:53.410 **********
2026-03-02 00:44:43.464599 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:43.464603 | orchestrator |
2026-03-02 00:44:43.464607 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:44:43.464611 | orchestrator | Monday 02 March 2026 00:44:35 +0000 (0:00:00.207) 0:00:53.618 **********
2026-03-02 00:44:43.464615 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:43.464619 | orchestrator |
2026-03-02 00:44:43.464626 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:44:43.464630 | orchestrator | Monday 02 March 2026 00:44:36 +0000 (0:00:00.214) 0:00:53.832 **********
2026-03-02 00:44:43.464634 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:43.464638 | orchestrator |
2026-03-02 00:44:43.464642 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:44:43.464646 | orchestrator | Monday 02 March 2026 00:44:36 +0000 (0:00:00.242) 0:00:54.075 **********
2026-03-02 00:44:43.464650 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:43.464654 | orchestrator |
2026-03-02 00:44:43.464658 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:44:43.464662 | orchestrator | Monday 02 March 2026 00:44:36 +0000 (0:00:00.199) 0:00:54.274 **********
2026-03-02 00:44:43.464666 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-02 00:44:43.464671 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-02 00:44:43.464676 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-02 00:44:43.464680 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-02 00:44:43.464684 | orchestrator |
2026-03-02 00:44:43.464688 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:44:43.464692 | orchestrator | Monday 02 March 2026 00:44:37 +0000 (0:00:00.694) 0:00:54.969 **********
2026-03-02 00:44:43.464696 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:43.464700 | orchestrator |
2026-03-02 00:44:43.464704 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:44:43.464708 | orchestrator | Monday 02 March 2026 00:44:37 +0000 (0:00:00.197) 0:00:55.166 **********
2026-03-02 00:44:43.464712 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:43.464716 | orchestrator |
2026-03-02 00:44:43.464720 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:44:43.464724 | orchestrator | Monday 02 March 2026 00:44:37 +0000 (0:00:00.189) 0:00:55.356 **********
2026-03-02 00:44:43.464728 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:43.464732 | orchestrator |
2026-03-02 00:44:43.464736 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-02 00:44:43.464740 | orchestrator | Monday 02 March 2026 00:44:37 +0000 (0:00:00.190) 0:00:55.546 **********
2026-03-02 00:44:43.464748 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:43.464752 | orchestrator |
2026-03-02 00:44:43.464756 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-02 00:44:43.464760 | orchestrator | Monday 02 March 2026 00:44:37 +0000 (0:00:00.189) 0:00:55.735 **********
2026-03-02 00:44:43.464764 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:43.464768 | orchestrator |
2026-03-02 00:44:43.464772 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-02 00:44:43.464776 | orchestrator | Monday 02 March 2026 00:44:38 +0000 (0:00:00.334) 0:00:56.070 **********
2026-03-02 00:44:43.464780 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c1d64d47-37ed-5019-b7d5-718691437d08'}})
2026-03-02 00:44:43.464785 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3d7235d6-f117-525f-ba2d-9ab371851486'}})
2026-03-02 00:44:43.464789 | orchestrator |
2026-03-02 00:44:43.464793 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-02 00:44:43.464797 | orchestrator | Monday 02 March 2026 00:44:38 +0000 (0:00:00.193) 0:00:56.264 **********
2026-03-02 00:44:43.464802 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'})
2026-03-02 00:44:43.464807 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'})
2026-03-02 00:44:43.464811 | orchestrator |
2026-03-02 00:44:43.464815 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-02 00:44:43.464829 | orchestrator | Monday 02 March 2026 00:44:40 +0000 (0:00:01.859) 0:00:58.123 **********
2026-03-02 00:44:43.464844 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'})
2026-03-02 00:44:43.464850 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'})
2026-03-02 00:44:43.464854 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:43.464858 | orchestrator |
2026-03-02 00:44:43.464862 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-02 00:44:43.464866 | orchestrator | Monday 02 March 2026 00:44:40 +0000 (0:00:00.156) 0:00:58.279 **********
2026-03-02 00:44:43.464870 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'})
2026-03-02 00:44:43.464874 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'})
2026-03-02 00:44:43.464878 | orchestrator |
2026-03-02 00:44:43.464882 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-02 00:44:43.464886 | orchestrator | Monday 02 March 2026 00:44:41 +0000 (0:00:01.368) 0:00:59.647 **********
2026-03-02 00:44:43.464890 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'})
2026-03-02 00:44:43.464895 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'})
2026-03-02 00:44:43.464902 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:43.464907 | orchestrator |
2026-03-02 00:44:43.464912 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-02 00:44:43.464917 | orchestrator | Monday 02 March 2026 00:44:42 +0000 (0:00:00.158) 0:00:59.806 **********
2026-03-02 00:44:43.464921 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:43.464926 | orchestrator |
2026-03-02 00:44:43.464931 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-02 00:44:43.464936 | orchestrator | Monday 02 March 2026 00:44:42 +0000 (0:00:00.118) 0:00:59.924 **********
2026-03-02 00:44:43.464944 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'})
2026-03-02 00:44:43.464949 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'})
2026-03-02 00:44:43.464954 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:43.464959 | orchestrator |
2026-03-02 00:44:43.464963 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-02 00:44:43.464968 | orchestrator | Monday 02 March 2026 00:44:42 +0000 (0:00:00.158) 0:01:00.083 **********
2026-03-02 00:44:43.464973 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:43.464978 | orchestrator |
2026-03-02 00:44:43.464982 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-02 00:44:43.464987 | orchestrator | Monday 02 March 2026 00:44:42 +0000 (0:00:00.128) 0:01:00.211 **********
2026-03-02 00:44:43.464991 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'})
2026-03-02 00:44:43.464996 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'})
2026-03-02 00:44:43.465001 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:43.465005 | orchestrator |
2026-03-02 00:44:43.465010 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-02 00:44:43.465015 | orchestrator | Monday 02 March 2026 00:44:42 +0000 (0:00:00.145) 0:01:00.357 **********
2026-03-02 00:44:43.465019 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:43.465024 | orchestrator |
2026-03-02 00:44:43.465029 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-02 00:44:43.465033 | orchestrator | Monday 02 March 2026 00:44:42 +0000 (0:00:00.123) 0:01:00.480 **********
2026-03-02 00:44:43.465038 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'})
2026-03-02 00:44:43.465042 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'})
2026-03-02 00:44:43.465047 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:43.465052 | orchestrator |
2026-03-02 00:44:43.465057 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-02 00:44:43.465061 | orchestrator | Monday 02 March 2026 00:44:42 +0000 (0:00:00.182) 0:01:00.663 **********
2026-03-02 00:44:43.465066 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:44:43.465071 | orchestrator |
2026-03-02 00:44:43.465075 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-02 00:44:43.465080 | orchestrator | Monday 02 March 2026 00:44:43 +0000 (0:00:00.517) 0:01:01.181 **********
2026-03-02 00:44:43.465088 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'})
2026-03-02 00:44:49.198845 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'})
2026-03-02 00:44:49.198953 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.198969 | orchestrator |
2026-03-02 00:44:49.198981 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-02 00:44:49.198994 | orchestrator | Monday 02 March 2026 00:44:43 +0000 (0:00:00.176) 0:01:01.357 **********
2026-03-02 00:44:49.199005 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'})
2026-03-02 00:44:49.199016 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'})
2026-03-02 00:44:49.199049 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.199060 | orchestrator |
2026-03-02 00:44:49.199072 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-02 00:44:49.199083 | orchestrator | Monday 02 March 2026 00:44:43 +0000 (0:00:00.168) 0:01:01.526 **********
2026-03-02 00:44:49.199094 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'})
2026-03-02 00:44:49.199104 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'})
2026-03-02 00:44:49.199115 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.199126 | orchestrator |
2026-03-02 00:44:49.199136 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-02 00:44:49.199160 | orchestrator | Monday 02 March 2026 00:44:43 +0000 (0:00:00.157) 0:01:01.683 **********
2026-03-02 00:44:49.199171 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.199182 | orchestrator |
2026-03-02 00:44:49.199193 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-02 00:44:49.199204 | orchestrator | Monday 02 March 2026 00:44:44 +0000 (0:00:00.144) 0:01:01.827 **********
2026-03-02 00:44:49.199214 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.199225 | orchestrator |
2026-03-02 00:44:49.199236 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-02 00:44:49.199246 | orchestrator | Monday 02 March 2026 00:44:44 +0000 (0:00:00.151) 0:01:01.979 **********
2026-03-02 00:44:49.199257 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.199268 | orchestrator |
2026-03-02 00:44:49.199308 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-02 00:44:49.199321 | orchestrator | Monday 02 March 2026 00:44:44 +0000 (0:00:00.138) 0:01:02.118 **********
2026-03-02 00:44:49.199332 | orchestrator | ok: [testbed-node-5] => {
2026-03-02 00:44:49.199343 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-03-02 00:44:49.199354 | orchestrator | }
2026-03-02 00:44:49.199366 | orchestrator |
2026-03-02 00:44:49.199380 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-02 00:44:49.199392 | orchestrator | Monday 02 March 2026 00:44:44 +0000 (0:00:00.131) 0:01:02.250 **********
2026-03-02 00:44:49.199404 | orchestrator | ok: [testbed-node-5] => {
2026-03-02 00:44:49.199417 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-03-02 00:44:49.199429 | orchestrator | }
2026-03-02 00:44:49.199442 | orchestrator |
2026-03-02 00:44:49.199454 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-02 00:44:49.199466 | orchestrator | Monday 02 March 2026 00:44:44 +0000 (0:00:00.142) 0:01:02.392 **********
2026-03-02 00:44:49.199478 | orchestrator | ok: [testbed-node-5] => {
2026-03-02 00:44:49.199491 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-03-02 00:44:49.199504 | orchestrator | }
2026-03-02 00:44:49.199516 | orchestrator |
2026-03-02 00:44:49.199529 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-02 00:44:49.199541 | orchestrator | Monday 02 March 2026 00:44:44 +0000 (0:00:00.141) 0:01:02.534 **********
2026-03-02 00:44:49.199553 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:44:49.199565 | orchestrator |
2026-03-02 00:44:49.199578 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-02 00:44:49.199590 | orchestrator | Monday 02 March 2026 00:44:45 +0000 (0:00:00.551) 0:01:03.085 **********
2026-03-02 00:44:49.199602 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:44:49.199615 | orchestrator |
2026-03-02 00:44:49.199627 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-02 00:44:49.199639 | orchestrator | Monday 02 March 2026 00:44:45 +0000 (0:00:00.511) 0:01:03.597 **********
2026-03-02 00:44:49.199652 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:44:49.199675 | orchestrator |
2026-03-02 00:44:49.199688 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-02 00:44:49.199700 | orchestrator | Monday 02 March 2026 00:44:46 +0000 (0:00:00.667) 0:01:04.265 **********
2026-03-02 00:44:49.199713 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:44:49.199725 | orchestrator |
2026-03-02 00:44:49.199739 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-02 00:44:49.199750 | orchestrator | Monday 02 March 2026 00:44:46 +0000 (0:00:00.126) 0:01:04.391 **********
2026-03-02 00:44:49.199760 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.199771 | orchestrator |
2026-03-02 00:44:49.199782 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-02 00:44:49.199792 | orchestrator | Monday 02 March 2026 00:44:46 +0000 (0:00:00.091) 0:01:04.483 **********
2026-03-02 00:44:49.199803 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.199813 | orchestrator |
2026-03-02 00:44:49.199824 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-02 00:44:49.199834 | orchestrator | Monday 02 March 2026 00:44:46 +0000 (0:00:00.101) 0:01:04.585 **********
2026-03-02 00:44:49.199845 | orchestrator | ok: [testbed-node-5] => {
2026-03-02 00:44:49.199856 | orchestrator |  "vgs_report": {
2026-03-02 00:44:49.199867 | orchestrator |  "vg": []
2026-03-02 00:44:49.199894 | orchestrator |  }
2026-03-02 00:44:49.199905 | orchestrator | }
2026-03-02 00:44:49.199916 | orchestrator |
2026-03-02 00:44:49.199927 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-02 00:44:49.199938 | orchestrator | Monday 02 March 2026 00:44:46 +0000 (0:00:00.122) 0:01:04.708 **********
2026-03-02 00:44:49.199948 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.199959 | orchestrator |
2026-03-02 00:44:49.199969 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-02 00:44:49.199980 | orchestrator | Monday 02 March 2026 00:44:47 +0000 (0:00:00.132) 0:01:04.840 **********
2026-03-02 00:44:49.199991 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.200002 | orchestrator |
2026-03-02 00:44:49.200012 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-02 00:44:49.200030 | orchestrator | Monday 02 March 2026 00:44:47 +0000 (0:00:00.122) 0:01:04.963 **********
2026-03-02 00:44:49.200048 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.200065 | orchestrator |
2026-03-02 00:44:49.200093 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-02 00:44:49.200112 | orchestrator | Monday 02 March 2026 00:44:47 +0000 (0:00:00.148) 0:01:05.112 **********
2026-03-02 00:44:49.200129 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.200148 | orchestrator |
2026-03-02 00:44:49.200165 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-02 00:44:49.200184 | orchestrator | Monday 02 March 2026 00:44:47 +0000 (0:00:00.127) 0:01:05.240 **********
2026-03-02 00:44:49.200202 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.200220 | orchestrator |
2026-03-02 00:44:49.200231 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-02 00:44:49.200241 | orchestrator | Monday 02 March 2026 00:44:47 +0000 (0:00:00.122) 0:01:05.362 **********
2026-03-02 00:44:49.200252 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.200262 | orchestrator |
2026-03-02 00:44:49.200273 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-02 00:44:49.200316 | orchestrator | Monday 02 March 2026 00:44:47 +0000 (0:00:00.126) 0:01:05.489 **********
2026-03-02 00:44:49.200327 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.200338 | orchestrator |
2026-03-02 00:44:49.200349 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-02 00:44:49.200359 | orchestrator | Monday 02 March 2026 00:44:47 +0000 (0:00:00.125) 0:01:05.615 **********
2026-03-02 00:44:49.200370 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.200380 | orchestrator |
2026-03-02 00:44:49.200391 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-02 00:44:49.200413 | orchestrator | Monday 02 March 2026 00:44:48 +0000 (0:00:00.273) 0:01:05.888 **********
2026-03-02 00:44:49.200424 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.200434 | orchestrator |
2026-03-02 00:44:49.200445 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-02 00:44:49.200456 | orchestrator | Monday 02 March 2026 00:44:48 +0000 (0:00:00.128) 0:01:06.017 **********
2026-03-02 00:44:49.200466 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.200477 | orchestrator |
2026-03-02 00:44:49.200488 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-02 00:44:49.200499 | orchestrator | Monday 02 March 2026 00:44:48 +0000 (0:00:00.127) 0:01:06.144 **********
2026-03-02 00:44:49.200509 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.200520 | orchestrator |
2026-03-02 00:44:49.200531 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-02 00:44:49.200541 | orchestrator | Monday 02 March 2026 00:44:48 +0000 (0:00:00.129) 0:01:06.273 **********
2026-03-02 00:44:49.200551 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.200562 | orchestrator |
2026-03-02 00:44:49.200573 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-02 00:44:49.200583 | orchestrator | Monday 02 March 2026 00:44:48 +0000 (0:00:00.138) 0:01:06.411 **********
2026-03-02 00:44:49.200594 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.200604 | orchestrator |
2026-03-02 00:44:49.200615 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-02 00:44:49.200626 | orchestrator | Monday 02 March 2026 00:44:48 +0000 (0:00:00.126) 0:01:06.538 **********
2026-03-02 00:44:49.200636 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.200647 | orchestrator |
2026-03-02 00:44:49.200657 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-02 00:44:49.200668 | orchestrator | Monday 02 March 2026 00:44:48 +0000 (0:00:00.123) 0:01:06.662 **********
2026-03-02 00:44:49.200679 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'})
2026-03-02 00:44:49.200690 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'})
2026-03-02 00:44:49.200701 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.200712 | orchestrator |
2026-03-02 00:44:49.200722 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-02 00:44:49.200733 | orchestrator | Monday 02 March 2026 00:44:49 +0000 (0:00:00.134) 0:01:06.797 **********
2026-03-02 00:44:49.200744 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'})
2026-03-02 00:44:49.200754 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'})
2026-03-02 00:44:49.200765 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:49.200776 | orchestrator |
2026-03-02 00:44:49.200787 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-02 00:44:49.200797 | orchestrator | Monday 02 March 2026 00:44:49 +0000 (0:00:00.136) 0:01:06.934 **********
2026-03-02 00:44:49.200818 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'})
2026-03-02 00:44:52.015049 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'})
2026-03-02 00:44:52.015200 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:52.015232 | orchestrator |
2026-03-02 00:44:52.015253 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-02 00:44:52.015273 | orchestrator | Monday 02 March 2026 00:44:49 +0000 (0:00:00.142) 0:01:07.077 **********
2026-03-02 00:44:52.015374 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'})
2026-03-02 00:44:52.015388 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'})
2026-03-02 00:44:52.015399 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:52.015410 | orchestrator |
2026-03-02 00:44:52.015421 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-02 00:44:52.015432 | orchestrator | Monday 02 March 2026 00:44:49 +0000 (0:00:00.132) 0:01:07.209 **********
2026-03-02 00:44:52.015443 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'})
2026-03-02 00:44:52.015461 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'})
2026-03-02 00:44:52.015472 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:52.015483 | orchestrator |
2026-03-02 00:44:52.015494 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-02 00:44:52.015504 | orchestrator | Monday 02 March 2026 00:44:49 +0000 (0:00:00.136) 0:01:07.346 **********
2026-03-02 00:44:52.015515 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'})
2026-03-02 00:44:52.015526 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'})
2026-03-02 00:44:52.015536 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:52.015547 | orchestrator |
2026-03-02 00:44:52.015559 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-02 00:44:52.015572 | orchestrator | Monday 02 March 2026 00:44:49 +0000 (0:00:00.251) 0:01:07.597 **********
2026-03-02 00:44:52.015585 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'})
2026-03-02 00:44:52.015598 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'})
2026-03-02 00:44:52.015612 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:52.015624 | orchestrator |
2026-03-02 00:44:52.015637 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-02 00:44:52.015650 | orchestrator | Monday 02 March 2026 00:44:49 +0000 (0:00:00.134) 0:01:07.732 **********
2026-03-02 00:44:52.015663 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'})
2026-03-02 00:44:52.015676 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'})
2026-03-02 00:44:52.015688 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:44:52.015700 | orchestrator |
2026-03-02 00:44:52.015711 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-02 00:44:52.015722 | orchestrator | Monday 02 March 2026 00:44:50 +0000 (0:00:00.125) 0:01:07.857 **********
2026-03-02 00:44:52.015733 |
orchestrator | ok: [testbed-node-5] 2026-03-02 00:44:52.015745 | orchestrator | 2026-03-02 00:44:52.015755 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-02 00:44:52.015766 | orchestrator | Monday 02 March 2026 00:44:50 +0000 (0:00:00.495) 0:01:08.352 ********** 2026-03-02 00:44:52.015777 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:44:52.015787 | orchestrator | 2026-03-02 00:44:52.015798 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-02 00:44:52.015815 | orchestrator | Monday 02 March 2026 00:44:51 +0000 (0:00:00.524) 0:01:08.877 ********** 2026-03-02 00:44:52.015826 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:44:52.015837 | orchestrator | 2026-03-02 00:44:52.015847 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-02 00:44:52.015858 | orchestrator | Monday 02 March 2026 00:44:51 +0000 (0:00:00.124) 0:01:09.002 ********** 2026-03-02 00:44:52.015869 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'vg_name': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'}) 2026-03-02 00:44:52.015880 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'vg_name': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'}) 2026-03-02 00:44:52.015891 | orchestrator | 2026-03-02 00:44:52.015902 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-02 00:44:52.015913 | orchestrator | Monday 02 March 2026 00:44:51 +0000 (0:00:00.159) 0:01:09.162 ********** 2026-03-02 00:44:52.015943 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'})  2026-03-02 00:44:52.015955 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'})  2026-03-02 00:44:52.015966 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:44:52.015976 | orchestrator | 2026-03-02 00:44:52.015987 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-02 00:44:52.015998 | orchestrator | Monday 02 March 2026 00:44:51 +0000 (0:00:00.156) 0:01:09.319 ********** 2026-03-02 00:44:52.016009 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'})  2026-03-02 00:44:52.016019 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'})  2026-03-02 00:44:52.016030 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:44:52.016041 | orchestrator | 2026-03-02 00:44:52.016051 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-02 00:44:52.016062 | orchestrator | Monday 02 March 2026 00:44:51 +0000 (0:00:00.147) 0:01:09.467 ********** 2026-03-02 00:44:52.016073 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'})  2026-03-02 00:44:52.016089 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'})  2026-03-02 00:44:52.016100 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:44:52.016111 | orchestrator | 2026-03-02 00:44:52.016122 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-02 00:44:52.016132 | orchestrator | Monday 02 March 2026 00:44:51 +0000 (0:00:00.168) 0:01:09.635 ********** 2026-03-02 00:44:52.016143 | 
orchestrator | ok: [testbed-node-5] => { 2026-03-02 00:44:52.016154 | orchestrator |  "lvm_report": { 2026-03-02 00:44:52.016165 | orchestrator |  "lv": [ 2026-03-02 00:44:52.016175 | orchestrator |  { 2026-03-02 00:44:52.016186 | orchestrator |  "lv_name": "osd-block-3d7235d6-f117-525f-ba2d-9ab371851486", 2026-03-02 00:44:52.016197 | orchestrator |  "vg_name": "ceph-3d7235d6-f117-525f-ba2d-9ab371851486" 2026-03-02 00:44:52.016208 | orchestrator |  }, 2026-03-02 00:44:52.016219 | orchestrator |  { 2026-03-02 00:44:52.016230 | orchestrator |  "lv_name": "osd-block-c1d64d47-37ed-5019-b7d5-718691437d08", 2026-03-02 00:44:52.016241 | orchestrator |  "vg_name": "ceph-c1d64d47-37ed-5019-b7d5-718691437d08" 2026-03-02 00:44:52.016251 | orchestrator |  } 2026-03-02 00:44:52.016262 | orchestrator |  ], 2026-03-02 00:44:52.016273 | orchestrator |  "pv": [ 2026-03-02 00:44:52.016311 | orchestrator |  { 2026-03-02 00:44:52.016322 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-02 00:44:52.016333 | orchestrator |  "vg_name": "ceph-c1d64d47-37ed-5019-b7d5-718691437d08" 2026-03-02 00:44:52.016344 | orchestrator |  }, 2026-03-02 00:44:52.016354 | orchestrator |  { 2026-03-02 00:44:52.016365 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-02 00:44:52.016388 | orchestrator |  "vg_name": "ceph-3d7235d6-f117-525f-ba2d-9ab371851486" 2026-03-02 00:44:52.016399 | orchestrator |  } 2026-03-02 00:44:52.016410 | orchestrator |  ] 2026-03-02 00:44:52.016420 | orchestrator |  } 2026-03-02 00:44:52.016431 | orchestrator | } 2026-03-02 00:44:52.016442 | orchestrator | 2026-03-02 00:44:52.016453 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:44:52.016464 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-02 00:44:52.016475 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-02 00:44:52.016486 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-02 00:44:52.016496 | orchestrator | 2026-03-02 00:44:52.016507 | orchestrator | 2026-03-02 00:44:52.016518 | orchestrator | 2026-03-02 00:44:52.016528 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 00:44:52.016539 | orchestrator | Monday 02 March 2026 00:44:51 +0000 (0:00:00.150) 0:01:09.785 ********** 2026-03-02 00:44:52.016550 | orchestrator | =============================================================================== 2026-03-02 00:44:52.016561 | orchestrator | Create block VGs -------------------------------------------------------- 5.70s 2026-03-02 00:44:52.016571 | orchestrator | Create block LVs -------------------------------------------------------- 4.11s 2026-03-02 00:44:52.016582 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.73s 2026-03-02 00:44:52.016593 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.67s 2026-03-02 00:44:52.016604 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.61s 2026-03-02 00:44:52.016614 | orchestrator | Add known partitions to the list of available block devices ------------- 1.59s 2026-03-02 00:44:52.016625 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.55s 2026-03-02 00:44:52.016636 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.54s 2026-03-02 00:44:52.016654 | orchestrator | Add known links to the list of available block devices ------------------ 1.36s 2026-03-02 00:44:52.399146 | orchestrator | Add known partitions to the list of available block devices ------------- 1.04s 2026-03-02 00:44:52.399255 | orchestrator | Add known partitions to the list of available block devices ------------- 0.85s 2026-03-02 00:44:52.399269 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.83s 2026-03-02 00:44:52.399327 | orchestrator | Prepare variables for OSD count check ----------------------------------- 0.80s 2026-03-02 00:44:52.399339 | orchestrator | Print LVM report data --------------------------------------------------- 0.80s 2026-03-02 00:44:52.399349 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s 2026-03-02 00:44:52.399359 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.77s 2026-03-02 00:44:52.399369 | orchestrator | Get initial list of available block devices ----------------------------- 0.71s 2026-03-02 00:44:52.399379 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s 2026-03-02 00:44:52.399389 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2026-03-02 00:44:52.399398 | orchestrator | Print number of OSDs wanted per DB+WAL VG ------------------------------- 0.68s 2026-03-02 00:45:04.525860 | orchestrator | 2026-03-02 00:45:04 | INFO  | Prepare task for execution of facts. 2026-03-02 00:45:04.588480 | orchestrator | 2026-03-02 00:45:04 | INFO  | Task 38666bb7-ce28-4dbd-9593-da8863a357a9 (facts) was prepared for execution. 2026-03-02 00:45:04.588622 | orchestrator | 2026-03-02 00:45:04 | INFO  | It takes a moment until task 38666bb7-ce28-4dbd-9593-da8863a357a9 (facts) has been started and output is visible here. 
2026-03-02 00:45:17.781371 | orchestrator |
2026-03-02 00:45:17.781484 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-02 00:45:17.781501 | orchestrator |
2026-03-02 00:45:17.781515 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-02 00:45:17.781526 | orchestrator | Monday 02 March 2026 00:45:08 +0000 (0:00:00.202) 0:00:00.202 **********
2026-03-02 00:45:17.781538 | orchestrator | ok: [testbed-manager]
2026-03-02 00:45:17.781549 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:45:17.781560 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:45:17.781572 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:45:17.781583 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:45:17.781594 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:45:17.781605 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:45:17.781616 | orchestrator |
2026-03-02 00:45:17.781627 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-02 00:45:17.781638 | orchestrator | Monday 02 March 2026 00:45:09 +0000 (0:00:00.854) 0:00:01.057 **********
2026-03-02 00:45:17.781649 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:45:17.781660 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:45:17.781671 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:45:17.781681 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:45:17.781692 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:45:17.781703 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:45:17.781714 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:45:17.781725 | orchestrator |
2026-03-02 00:45:17.781736 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-02 00:45:17.781747 | orchestrator |
2026-03-02 00:45:17.781758 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-02 00:45:17.781769 | orchestrator | Monday 02 March 2026 00:45:10 +0000 (0:00:01.041) 0:00:02.099 **********
2026-03-02 00:45:17.781780 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:45:17.781790 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:45:17.781807 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:45:17.781825 | orchestrator | ok: [testbed-manager]
2026-03-02 00:45:17.781845 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:45:17.781865 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:45:17.781884 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:45:17.781903 | orchestrator |
2026-03-02 00:45:17.781923 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-02 00:45:17.781942 | orchestrator |
2026-03-02 00:45:17.781961 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-02 00:45:17.781980 | orchestrator | Monday 02 March 2026 00:45:17 +0000 (0:00:06.738) 0:00:08.838 **********
2026-03-02 00:45:17.782001 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:45:17.782091 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:45:17.782116 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:45:17.782135 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:45:17.782153 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:45:17.782173 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:45:17.782191 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:45:17.782210 | orchestrator |
2026-03-02 00:45:17.782228 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:45:17.782246 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:45:17.782259 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:45:17.782340 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:45:17.782352 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:45:17.782363 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:45:17.782374 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:45:17.782384 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-02 00:45:17.782395 | orchestrator |
2026-03-02 00:45:17.782405 | orchestrator |
2026-03-02 00:45:17.782416 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 00:45:17.782427 | orchestrator | Monday 02 March 2026 00:45:17 +0000 (0:00:00.473) 0:00:09.311 **********
2026-03-02 00:45:17.782438 | orchestrator | ===============================================================================
2026-03-02 00:45:17.782449 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.74s
2026-03-02 00:45:17.782460 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.04s
2026-03-02 00:45:17.782470 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.85s
2026-03-02 00:45:17.782481 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.47s
2026-03-02 00:45:29.761780 | orchestrator | 2026-03-02 00:45:29 | INFO  | Prepare task for execution of frr.
2026-03-02 00:45:29.834082 | orchestrator | 2026-03-02 00:45:29 | INFO  | Task 6b20a25c-1e72-459e-9bf7-3ff3ce722be4 (frr) was prepared for execution.
2026-03-02 00:45:29.834192 | orchestrator | 2026-03-02 00:45:29 | INFO  | It takes a moment until task 6b20a25c-1e72-459e-9bf7-3ff3ce722be4 (frr) has been started and output is visible here.
2026-03-02 00:45:52.397828 | orchestrator |
2026-03-02 00:45:52.397922 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-03-02 00:45:52.397933 | orchestrator |
2026-03-02 00:45:52.397939 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-03-02 00:45:52.397945 | orchestrator | Monday 02 March 2026 00:45:33 +0000 (0:00:00.206) 0:00:00.206 **********
2026-03-02 00:45:52.397951 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-03-02 00:45:52.397958 | orchestrator |
2026-03-02 00:45:52.397963 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-03-02 00:45:52.397969 | orchestrator | Monday 02 March 2026 00:45:33 +0000 (0:00:00.197) 0:00:00.404 **********
2026-03-02 00:45:52.397974 | orchestrator | changed: [testbed-manager]
2026-03-02 00:45:52.397980 | orchestrator |
2026-03-02 00:45:52.397986 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-03-02 00:45:52.397991 | orchestrator | Monday 02 March 2026 00:45:34 +0000 (0:00:01.030) 0:00:01.434 **********
2026-03-02 00:45:52.397996 | orchestrator | changed: [testbed-manager]
2026-03-02 00:45:52.398001 | orchestrator |
2026-03-02 00:45:52.398007 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-03-02 00:45:52.398012 | orchestrator | Monday 02 March 2026 00:45:43 +0000 (0:00:08.259) 0:00:09.694 **********
2026-03-02 00:45:52.398067 | orchestrator | ok: [testbed-manager]
2026-03-02 00:45:52.398074 | orchestrator |
2026-03-02 00:45:52.398079 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-03-02 00:45:52.398085 | orchestrator | Monday 02 March 2026 00:45:44 +0000 (0:00:00.885) 0:00:10.579 **********
2026-03-02 00:45:52.398090 | orchestrator | changed: [testbed-manager]
2026-03-02 00:45:52.398114 | orchestrator |
2026-03-02 00:45:52.398120 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-03-02 00:45:52.398125 | orchestrator | Monday 02 March 2026 00:45:44 +0000 (0:00:00.909) 0:00:11.489 **********
2026-03-02 00:45:52.398131 | orchestrator | ok: [testbed-manager]
2026-03-02 00:45:52.398136 | orchestrator |
2026-03-02 00:45:52.398142 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ********
2026-03-02 00:45:52.398147 | orchestrator | Monday 02 March 2026 00:45:46 +0000 (0:00:01.081) 0:00:12.570 **********
2026-03-02 00:45:52.398152 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:45:52.398157 | orchestrator |
2026-03-02 00:45:52.398163 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] ***
2026-03-02 00:45:52.398168 | orchestrator | Monday 02 March 2026 00:45:46 +0000 (0:00:00.141) 0:00:12.711 **********
2026-03-02 00:45:52.398173 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:45:52.398179 | orchestrator |
2026-03-02 00:45:52.398184 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] **********
2026-03-02 00:45:52.398189 | orchestrator | Monday 02 March 2026 00:45:46 +0000 (0:00:00.137) 0:00:12.849 **********
2026-03-02 00:45:52.398194 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:45:52.398200 | orchestrator |
2026-03-02 00:45:52.398205 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-03-02 00:45:52.398211 | orchestrator | Monday 02 March 2026 00:45:46 +0000 (0:00:00.145) 0:00:12.995 **********
2026-03-02 00:45:52.398216 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:45:52.398221 | orchestrator |
2026-03-02 00:45:52.398227 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-03-02 00:45:52.398232 | orchestrator | Monday 02 March 2026 00:45:46 +0000 (0:00:00.123) 0:00:13.119 **********
2026-03-02 00:45:52.398237 | orchestrator | skipping: [testbed-manager]
2026-03-02 00:45:52.398243 | orchestrator |
2026-03-02 00:45:52.398248 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-03-02 00:45:52.398253 | orchestrator | Monday 02 March 2026 00:45:46 +0000 (0:00:00.144) 0:00:13.263 **********
2026-03-02 00:45:52.398259 | orchestrator | changed: [testbed-manager]
2026-03-02 00:45:52.398264 | orchestrator |
2026-03-02 00:45:52.398269 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-03-02 00:45:52.398274 | orchestrator | Monday 02 March 2026 00:45:47 +0000 (0:00:01.009) 0:00:14.273 **********
2026-03-02 00:45:52.398280 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-03-02 00:45:52.398285 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-03-02 00:45:52.398292 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-03-02 00:45:52.398297 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-03-02 00:45:52.398303 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-03-02 00:45:52.398308 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-03-02 00:45:52.398314 | orchestrator |
2026-03-02 00:45:52.398378 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-03-02 00:45:52.398384 | orchestrator | Monday 02 March 2026 00:45:49 +0000 (0:00:01.130) 0:00:16.284 **********
2026-03-02 00:45:52.398391 | orchestrator | ok: [testbed-manager]
2026-03-02 00:45:52.398398 | orchestrator |
2026-03-02 00:45:52.398404 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-03-02 00:45:52.398411 | orchestrator | Monday 02 March 2026 00:45:50 +0000 (0:00:01.130) 0:00:17.414 **********
2026-03-02 00:45:52.398417 | orchestrator | changed: [testbed-manager]
2026-03-02 00:45:52.398424 | orchestrator |
2026-03-02 00:45:52.398430 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:45:52.398442 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-02 00:45:52.398448 | orchestrator |
2026-03-02 00:45:52.398455 | orchestrator |
2026-03-02 00:45:52.398473 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 00:45:52.398480 | orchestrator | Monday 02 March 2026 00:45:52 +0000 (0:00:01.307) 0:00:18.722 **********
2026-03-02 00:45:52.398486 | orchestrator | ===============================================================================
2026-03-02 00:45:52.398493 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.26s
2026-03-02 00:45:52.398499 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.01s
2026-03-02 00:45:52.398506 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.31s
2026-03-02 00:45:52.398512 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.13s
2026-03-02 00:45:52.398518 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.08s
2026-03-02 00:45:52.398525 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.03s
2026-03-02 00:45:52.398531 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.01s
2026-03-02 00:45:52.398537 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.91s
2026-03-02 00:45:52.398543 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.89s
2026-03-02 00:45:52.398550 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s
2026-03-02 00:45:52.398556 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.15s
2026-03-02 00:45:52.398562 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.14s
2026-03-02 00:45:52.398568 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.14s
2026-03-02 00:45:52.398574 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.14s
2026-03-02 00:45:52.398580 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.12s
2026-03-02 00:45:52.597105 | orchestrator |
2026-03-02 00:45:52.600559 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Mar 2 00:45:52 UTC 2026
2026-03-02 00:45:52.600613 | orchestrator |
2026-03-02 00:45:54.297424 | orchestrator | 2026-03-02 00:45:54 | INFO  | Collection nutshell is prepared for execution
2026-03-02 00:45:54.297530 | orchestrator | 2026-03-02 00:45:54 | INFO  | A [0] - dotfiles
2026-03-02 00:46:04.350961 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [0] - homer
2026-03-02 00:46:04.351055 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [0] - netdata
2026-03-02 00:46:04.351264 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [0] - openstackclient
2026-03-02 00:46:04.351411 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [0] - phpmyadmin
2026-03-02 00:46:04.351717 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [0] - common
2026-03-02 00:46:04.355367 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [1] -- loadbalancer
2026-03-02 00:46:04.355491 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [2] --- opensearch
2026-03-02 00:46:04.355812 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [2] --- mariadb-ng
2026-03-02 00:46:04.355984 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [3] ---- horizon
2026-03-02 00:46:04.356009 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [3] ---- keystone
2026-03-02 00:46:04.356554 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [4] ----- neutron
2026-03-02 00:46:04.356586 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [5] ------ wait-for-nova
2026-03-02 00:46:04.356675 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [6] ------- octavia
2026-03-02 00:46:04.358230 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [4] ----- barbican
2026-03-02 00:46:04.358479 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [4] ----- designate
2026-03-02 00:46:04.358506 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [4] ----- ironic
2026-03-02 00:46:04.358515 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [4] ----- placement
2026-03-02 00:46:04.358522 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [4] ----- magnum
2026-03-02 00:46:04.359192 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [1] -- openvswitch
2026-03-02 00:46:04.359292 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [2] --- ovn
2026-03-02 00:46:04.359741 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [1] -- memcached
2026-03-02 00:46:04.359949 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [1] -- redis
2026-03-02 00:46:04.359970 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [1] -- rabbitmq-ng
2026-03-02 00:46:04.360290 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [0] - kubernetes
2026-03-02 00:46:04.362547 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [1] -- kubeconfig
2026-03-02 00:46:04.362598 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [1] -- copy-kubeconfig
2026-03-02 00:46:04.362897 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [0] - ceph
2026-03-02 00:46:04.364697 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [1] -- ceph-pools
2026-03-02 00:46:04.364890 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [2] --- copy-ceph-keys
2026-03-02 00:46:04.365228 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [3] ---- cephclient
2026-03-02 00:46:04.365251 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [4] ----- ceph-bootstrap-dashboard
2026-03-02 00:46:04.365278 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [4] ----- wait-for-keystone
2026-03-02 00:46:04.365509 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [5] ------ kolla-ceph-rgw
2026-03-02 00:46:04.365527 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [5] ------ glance
2026-03-02 00:46:04.365754 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [5] ------ cinder
2026-03-02 00:46:04.365811 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [5] ------ nova
2026-03-02 00:46:04.367528 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [4] ----- prometheus
2026-03-02 00:46:04.367564 | orchestrator | 2026-03-02 00:46:04 | INFO  | A [5] ------ grafana
2026-03-02 00:46:04.592659 | orchestrator | 2026-03-02 00:46:04 | INFO  | All tasks of the collection nutshell are prepared for execution
2026-03-02 00:46:04.592738 | orchestrator | 2026-03-02 00:46:04 | INFO  | Tasks are running in the background
2026-03-02 00:46:07.359063 | orchestrator | 2026-03-02 00:46:07 | INFO  | No task IDs specified, wait for all currently running tasks
2026-03-02 00:46:09.472161 | orchestrator | 2026-03-02 00:46:09 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:46:09.472298 | orchestrator | 2026-03-02 00:46:09 | INFO  | Task aa69484b-05e9-495d-9804-685b26e0bef1 is in state STARTED
2026-03-02 00:46:09.472904 | orchestrator | 2026-03-02 00:46:09 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED
2026-03-02 00:46:09.476563 | orchestrator | 2026-03-02 00:46:09 | INFO  | Task 7660803d-92d3-46fb-be3d-77dc654387bd is in state STARTED
2026-03-02 00:46:09.476911 | orchestrator | 2026-03-02 00:46:09 | INFO  | Task 687f7e6f-ee7d-435f-821d-4c7c2b3ccd77 is in state STARTED
2026-03-02 00:46:09.477631 | orchestrator | 2026-03-02 00:46:09 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED
2026-03-02 00:46:09.478069 | orchestrator | 2026-03-02 00:46:09 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED
2026-03-02 00:46:09.478722 | orchestrator | 2026-03-02 00:46:09 | INFO  | Task 0b6a50f0-07d5-46e7-b61e-eeb85979d8f2 is in state SUCCESS
2026-03-02 00:46:09.479129 | orchestrator | 2026-03-02 00:46:09 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:46:12.528296 | orchestrator | 2026-03-02 00:46:12 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:46:12.528802 | orchestrator | 2026-03-02 00:46:12 | INFO  | Task aa69484b-05e9-495d-9804-685b26e0bef1 is in state STARTED
2026-03-02 00:46:12.529048 | orchestrator | 2026-03-02 00:46:12 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED
2026-03-02 00:46:12.529499 | orchestrator | 2026-03-02 00:46:12 | INFO  | Task 7660803d-92d3-46fb-be3d-77dc654387bd is in state STARTED
2026-03-02 00:46:12.532116 | orchestrator | 2026-03-02 00:46:12 | INFO  | Task 687f7e6f-ee7d-435f-821d-4c7c2b3ccd77 is in state STARTED
2026-03-02 00:46:12.532177 | orchestrator | 2026-03-02 00:46:12 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED
2026-03-02 00:46:12.533740 | orchestrator | 2026-03-02 00:46:12 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED
2026-03-02 00:46:12.533777 | orchestrator | 2026-03-02 00:46:12 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:46:15.573233 | orchestrator | 2026-03-02 00:46:15 | INFO  | Task
d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:46:15.573381 | orchestrator | 2026-03-02 00:46:15 | INFO  | Task aa69484b-05e9-495d-9804-685b26e0bef1 is in state STARTED 2026-03-02 00:46:15.573857 | orchestrator | 2026-03-02 00:46:15 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED 2026-03-02 00:46:15.574470 | orchestrator | 2026-03-02 00:46:15 | INFO  | Task 7660803d-92d3-46fb-be3d-77dc654387bd is in state STARTED 2026-03-02 00:46:15.575066 | orchestrator | 2026-03-02 00:46:15 | INFO  | Task 687f7e6f-ee7d-435f-821d-4c7c2b3ccd77 is in state STARTED 2026-03-02 00:46:15.578359 | orchestrator | 2026-03-02 00:46:15 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:46:15.578689 | orchestrator | 2026-03-02 00:46:15 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED 2026-03-02 00:46:15.578727 | orchestrator | 2026-03-02 00:46:15 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:46:18.685698 | orchestrator | 2026-03-02 00:46:18 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:46:18.687902 | orchestrator | 2026-03-02 00:46:18 | INFO  | Task aa69484b-05e9-495d-9804-685b26e0bef1 is in state STARTED 2026-03-02 00:46:18.692833 | orchestrator | 2026-03-02 00:46:18 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED 2026-03-02 00:46:18.702833 | orchestrator | 2026-03-02 00:46:18 | INFO  | Task 7660803d-92d3-46fb-be3d-77dc654387bd is in state STARTED 2026-03-02 00:46:18.705979 | orchestrator | 2026-03-02 00:46:18 | INFO  | Task 687f7e6f-ee7d-435f-821d-4c7c2b3ccd77 is in state STARTED 2026-03-02 00:46:18.708639 | orchestrator | 2026-03-02 00:46:18 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:46:18.713446 | orchestrator | 2026-03-02 00:46:18 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED 2026-03-02 00:46:18.713500 | orchestrator | 2026-03-02 
00:46:18 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:46:21.899473 | orchestrator | 2026-03-02 00:46:21 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:46:21.899649 | orchestrator | 2026-03-02 00:46:21 | INFO  | Task aa69484b-05e9-495d-9804-685b26e0bef1 is in state STARTED 2026-03-02 00:46:21.900304 | orchestrator | 2026-03-02 00:46:21 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED 2026-03-02 00:46:21.905719 | orchestrator | 2026-03-02 00:46:21 | INFO  | Task 7660803d-92d3-46fb-be3d-77dc654387bd is in state STARTED 2026-03-02 00:46:21.905800 | orchestrator | 2026-03-02 00:46:21 | INFO  | Task 687f7e6f-ee7d-435f-821d-4c7c2b3ccd77 is in state STARTED 2026-03-02 00:46:21.907017 | orchestrator | 2026-03-02 00:46:21 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:46:21.907041 | orchestrator | 2026-03-02 00:46:21 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED 2026-03-02 00:46:21.907048 | orchestrator | 2026-03-02 00:46:21 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:46:25.161761 | orchestrator | 2026-03-02 00:46:25 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:46:25.162204 | orchestrator | 2026-03-02 00:46:25 | INFO  | Task aa69484b-05e9-495d-9804-685b26e0bef1 is in state STARTED 2026-03-02 00:46:25.163192 | orchestrator | 2026-03-02 00:46:25 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED 2026-03-02 00:46:25.163968 | orchestrator | 2026-03-02 00:46:25 | INFO  | Task 7660803d-92d3-46fb-be3d-77dc654387bd is in state STARTED 2026-03-02 00:46:25.164798 | orchestrator | 2026-03-02 00:46:25 | INFO  | Task 687f7e6f-ee7d-435f-821d-4c7c2b3ccd77 is in state STARTED 2026-03-02 00:46:25.165366 | orchestrator | 2026-03-02 00:46:25 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:46:25.166145 | orchestrator | 2026-03-02 
00:46:25 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED 2026-03-02 00:46:25.166263 | orchestrator | 2026-03-02 00:46:25 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:46:28.214170 | orchestrator | 2026-03-02 00:46:28 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:46:28.214971 | orchestrator | 2026-03-02 00:46:28 | INFO  | Task aa69484b-05e9-495d-9804-685b26e0bef1 is in state STARTED 2026-03-02 00:46:28.216656 | orchestrator | 2026-03-02 00:46:28 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED 2026-03-02 00:46:28.218129 | orchestrator | 2026-03-02 00:46:28 | INFO  | Task 7660803d-92d3-46fb-be3d-77dc654387bd is in state STARTED 2026-03-02 00:46:28.219636 | orchestrator | 2026-03-02 00:46:28 | INFO  | Task 687f7e6f-ee7d-435f-821d-4c7c2b3ccd77 is in state STARTED 2026-03-02 00:46:28.220056 | orchestrator | 2026-03-02 00:46:28 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:46:28.220816 | orchestrator | 2026-03-02 00:46:28 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED 2026-03-02 00:46:28.220843 | orchestrator | 2026-03-02 00:46:28 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:46:31.309331 | orchestrator | 2026-03-02 00:46:31 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:46:31.309417 | orchestrator | 2026-03-02 00:46:31 | INFO  | Task aa69484b-05e9-495d-9804-685b26e0bef1 is in state STARTED 2026-03-02 00:46:31.309426 | orchestrator | 2026-03-02 00:46:31 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED 2026-03-02 00:46:31.309434 | orchestrator | 2026-03-02 00:46:31 | INFO  | Task 7660803d-92d3-46fb-be3d-77dc654387bd is in state SUCCESS 2026-03-02 00:46:31.309917 | orchestrator | 2026-03-02 00:46:31.309938 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-02 00:46:31.309946 | 
orchestrator | 2026-03-02 00:46:31.309953 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-02 00:46:31.309964 | orchestrator | Monday 02 March 2026 00:43:57 +0000 (0:00:00.242) 0:00:00.242 ********** 2026-03-02 00:46:31.309980 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:46:31.309987 | orchestrator | 2026-03-02 00:46:31.309993 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-02 00:46:31.310000 | orchestrator | Monday 02 March 2026 00:43:57 +0000 (0:00:00.120) 0:00:00.362 ********** 2026-03-02 00:46:31.310006 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-02 00:46:31.310060 | orchestrator | 2026-03-02 00:46:31.310068 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-02 00:46:31.310074 | orchestrator | 2026-03-02 00:46:31.310081 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-02 00:46:31.310087 | orchestrator | Monday 02 March 2026 00:43:57 +0000 (0:00:00.140) 0:00:00.503 ********** 2026-03-02 00:46:31.310093 | orchestrator | included: /ansible/roles/opensearch/tasks/pull.yml for testbed-node-0 2026-03-02 00:46:31.310099 | orchestrator | 2026-03-02 00:46:31.310105 | orchestrator | TASK [service-images-pull : opensearch | Pull images] ************************** 2026-03-02 00:46:31.310111 | orchestrator | Monday 02 March 2026 00:43:57 +0000 (0:00:00.175) 0:00:00.678 ********** 2026-03-02 00:46:31.310118 | orchestrator | changed: [testbed-node-0] => (item=opensearch) 2026-03-02 00:46:31.310124 | orchestrator | changed: [testbed-node-0] => (item=opensearch-dashboards) 2026-03-02 00:46:31.310144 | orchestrator | 2026-03-02 00:46:31.310150 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:46:31.310157 | orchestrator | testbed-node-0 : ok=4  changed=1 
 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:46:31.310170 | orchestrator | 2026-03-02 00:46:31.310176 | orchestrator | 2026-03-02 00:46:31.310182 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 00:46:31.310188 | orchestrator | Monday 02 March 2026 00:46:07 +0000 (0:02:10.416) 0:02:11.094 ********** 2026-03-02 00:46:31.310195 | orchestrator | =============================================================================== 2026-03-02 00:46:31.310201 | orchestrator | service-images-pull : opensearch | Pull images ------------------------ 130.42s 2026-03-02 00:46:31.310207 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.18s 2026-03-02 00:46:31.310214 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.14s 2026-03-02 00:46:31.310220 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.12s 2026-03-02 00:46:31.310226 | orchestrator | 2026-03-02 00:46:31.310232 | orchestrator | 2026-03-02 00:46:31.310238 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-03-02 00:46:31.310245 | orchestrator | 2026-03-02 00:46:31.310251 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2026-03-02 00:46:31.310257 | orchestrator | Monday 02 March 2026 00:46:16 +0000 (0:00:00.763) 0:00:00.763 ********** 2026-03-02 00:46:31.310263 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:46:31.310269 | orchestrator | changed: [testbed-manager] 2026-03-02 00:46:31.310275 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:46:31.310282 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:46:31.310288 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:46:31.310294 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:46:31.310300 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:46:31.310306 | orchestrator | 2026-03-02 00:46:31.310313 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2026-03-02 00:46:31.310319 | orchestrator | Monday 02 March 2026 00:46:20 +0000 (0:00:04.687) 0:00:05.451 ********** 2026-03-02 00:46:31.310325 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-02 00:46:31.310332 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-02 00:46:31.310404 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-02 00:46:31.310412 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-02 00:46:31.310418 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-02 00:46:31.310439 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-02 00:46:31.310447 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-02 00:46:31.310453 | orchestrator | 2026-03-02 00:46:31.310459 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2026-03-02 00:46:31.310466 | orchestrator | Monday 02 March 2026 00:46:22 +0000 (0:00:01.967) 0:00:07.418 ********** 2026-03-02 00:46:31.310475 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-02 00:46:21.630609', 'end': '2026-03-02 00:46:21.634071', 'delta': '0:00:00.003462', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-02 00:46:31.310498 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-02 00:46:21.800454', 'end': '2026-03-02 00:46:21.809532', 'delta': '0:00:00.009078', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-02 00:46:31.310506 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-02 00:46:21.719593', 'end': '2026-03-02 00:46:21.725485', 'delta': '0:00:00.005892', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-02 00:46:31.310512 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-02 00:46:22.094241', 'end': '2026-03-02 00:46:22.102066', 'delta': '0:00:00.007825', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-02 00:46:31.310519 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-02 00:46:22.487766', 'end': '2026-03-02 00:46:22.497972', 'delta': '0:00:00.010206', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-02 00:46:31.310530 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-02 00:46:22.408187', 'end': '2026-03-02 00:46:22.416299', 'delta': '0:00:00.008112', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-02 00:46:31.310541 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-02 00:46:22.446066', 'end': '2026-03-02 00:46:22.453951', 'delta': '0:00:00.007885', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-02 00:46:31.310548 | orchestrator | 2026-03-02 00:46:31.310555 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2026-03-02 00:46:31.310562 | orchestrator | Monday 02 March 2026 00:46:25 +0000 (0:00:02.547) 0:00:09.966 ********** 2026-03-02 00:46:31.310568 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-02 00:46:31.310574 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-02 00:46:31.310581 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-02 00:46:31.310587 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-02 00:46:31.310593 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-02 00:46:31.310599 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-02 00:46:31.310605 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-02 00:46:31.310611 | orchestrator | 2026-03-02 00:46:31.310617 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2026-03-02 00:46:31.310623 | orchestrator | Monday 02 March 2026 00:46:26 +0000 (0:00:01.336) 0:00:11.302 ********** 2026-03-02 00:46:31.310629 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-03-02 00:46:31.310634 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-03-02 00:46:31.310640 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-03-02 00:46:31.310648 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-03-02 00:46:31.310654 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-03-02 00:46:31.310660 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-03-02 00:46:31.310667 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-03-02 00:46:31.310674 | orchestrator | 2026-03-02 00:46:31.310680 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:46:31.310852 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:46:31.310862 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:46:31.310868 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:46:31.310875 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:46:31.310881 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:46:31.310888 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:46:31.310894 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:46:31.310900 | orchestrator | 2026-03-02 00:46:31.310907 | orchestrator | 2026-03-02 00:46:31.310913 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-03-02 00:46:31.310919 | orchestrator | Monday 02 March 2026 00:46:29 +0000 (0:00:02.573) 0:00:13.876 ********** 2026-03-02 00:46:31.310924 | orchestrator | =============================================================================== 2026-03-02 00:46:31.310930 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.69s 2026-03-02 00:46:31.310937 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.57s 2026-03-02 00:46:31.310943 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.55s 2026-03-02 00:46:31.310949 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.97s 2026-03-02 00:46:31.310955 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.34s 2026-03-02 00:46:31.311363 | orchestrator | 2026-03-02 00:46:31 | INFO  | Task 687f7e6f-ee7d-435f-821d-4c7c2b3ccd77 is in state STARTED 2026-03-02 00:46:31.314477 | orchestrator | 2026-03-02 00:46:31 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:46:31.317309 | orchestrator | 2026-03-02 00:46:31 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED 2026-03-02 00:46:31.320469 | orchestrator | 2026-03-02 00:46:31 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED 2026-03-02 00:46:31.320518 | orchestrator | 2026-03-02 00:46:31 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:46:34.530481 | orchestrator | 2026-03-02 00:46:34 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:46:34.530540 | orchestrator | 2026-03-02 00:46:34 | INFO  | Task aa69484b-05e9-495d-9804-685b26e0bef1 is in state STARTED 2026-03-02 00:46:34.530549 | orchestrator | 2026-03-02 00:46:34 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is 
in state STARTED 2026-03-02 00:46:34.530556 | orchestrator | 2026-03-02 00:46:34 | INFO  | Task 687f7e6f-ee7d-435f-821d-4c7c2b3ccd77 is in state STARTED 2026-03-02 00:46:34.530563 | orchestrator | 2026-03-02 00:46:34 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:46:34.530569 | orchestrator | 2026-03-02 00:46:34 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED 2026-03-02 00:46:34.530573 | orchestrator | 2026-03-02 00:46:34 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED 2026-03-02 00:46:34.530577 | orchestrator | 2026-03-02 00:46:34 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:46:37.543556 | orchestrator | 2026-03-02 00:46:37 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:46:37.543656 | orchestrator | 2026-03-02 00:46:37 | INFO  | Task aa69484b-05e9-495d-9804-685b26e0bef1 is in state STARTED 2026-03-02 00:46:37.544288 | orchestrator | 2026-03-02 00:46:37 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED 2026-03-02 00:46:37.544910 | orchestrator | 2026-03-02 00:46:37 | INFO  | Task 687f7e6f-ee7d-435f-821d-4c7c2b3ccd77 is in state STARTED 2026-03-02 00:46:37.545332 | orchestrator | 2026-03-02 00:46:37 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:46:37.546272 | orchestrator | 2026-03-02 00:46:37 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED 2026-03-02 00:46:37.546440 | orchestrator | 2026-03-02 00:46:37 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED 2026-03-02 00:46:37.546543 | orchestrator | 2026-03-02 00:46:37 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:46:40.590463 | orchestrator | 2026-03-02 00:46:40 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:46:40.590533 | orchestrator | 2026-03-02 00:46:40 | INFO  | Task aa69484b-05e9-495d-9804-685b26e0bef1 is in 
state STARTED 2026-03-02 00:46:40.590539 | orchestrator | 2026-03-02 00:46:40 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED 2026-03-02 00:46:40.590543 | orchestrator | 2026-03-02 00:46:40 | INFO  | Task 687f7e6f-ee7d-435f-821d-4c7c2b3ccd77 is in state STARTED 2026-03-02 00:46:40.590547 | orchestrator | 2026-03-02 00:46:40 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:46:40.590551 | orchestrator | 2026-03-02 00:46:40 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED 2026-03-02 00:46:40.590555 | orchestrator | 2026-03-02 00:46:40 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED 2026-03-02 00:46:40.590560 | orchestrator | 2026-03-02 00:46:40 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:46:43.611773 | orchestrator | 2026-03-02 00:46:43 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:46:43.612115 | orchestrator | 2026-03-02 00:46:43 | INFO  | Task aa69484b-05e9-495d-9804-685b26e0bef1 is in state STARTED 2026-03-02 00:46:43.612830 | orchestrator | 2026-03-02 00:46:43 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED 2026-03-02 00:46:43.613473 | orchestrator | 2026-03-02 00:46:43 | INFO  | Task 687f7e6f-ee7d-435f-821d-4c7c2b3ccd77 is in state STARTED 2026-03-02 00:46:43.614181 | orchestrator | 2026-03-02 00:46:43 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:46:43.614846 | orchestrator | 2026-03-02 00:46:43 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED 2026-03-02 00:46:43.615681 | orchestrator | 2026-03-02 00:46:43 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED 2026-03-02 00:46:43.615705 | orchestrator | 2026-03-02 00:46:43 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:46:46.713669 | orchestrator | 2026-03-02 00:46:46 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state 
STARTED 2026-03-02 00:46:46.714948 | orchestrator | 2026-03-02 00:46:46 | INFO  | Task aa69484b-05e9-495d-9804-685b26e0bef1 is in state STARTED 2026-03-02 00:46:46.716141 | orchestrator | 2026-03-02 00:46:46 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED 2026-03-02 00:46:46.718475 | orchestrator | 2026-03-02 00:46:46 | INFO  | Task 687f7e6f-ee7d-435f-821d-4c7c2b3ccd77 is in state STARTED 2026-03-02 00:46:46.723251 | orchestrator | 2026-03-02 00:46:46 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:46:46.724659 | orchestrator | 2026-03-02 00:46:46 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED 2026-03-02 00:46:46.727252 | orchestrator | 2026-03-02 00:46:46 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED 2026-03-02 00:46:46.727801 | orchestrator | 2026-03-02 00:46:46 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:46:49.932592 | orchestrator | 2026-03-02 00:46:49 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:46:49.932653 | orchestrator | 2026-03-02 00:46:49 | INFO  | Task aa69484b-05e9-495d-9804-685b26e0bef1 is in state STARTED 2026-03-02 00:46:49.932662 | orchestrator | 2026-03-02 00:46:49 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED 2026-03-02 00:46:49.932669 | orchestrator | 2026-03-02 00:46:49 | INFO  | Task 687f7e6f-ee7d-435f-821d-4c7c2b3ccd77 is in state STARTED 2026-03-02 00:46:49.932675 | orchestrator | 2026-03-02 00:46:49 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:46:49.932692 | orchestrator | 2026-03-02 00:46:49 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED 2026-03-02 00:46:49.932700 | orchestrator | 2026-03-02 00:46:49 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED 2026-03-02 00:46:49.932707 | orchestrator | 2026-03-02 00:46:49 | INFO  | Wait 1 second(s) until the next check 
2026-03-02 00:46:53.026763 | orchestrator | 2026-03-02 00:46:53 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:46:53.027741 | orchestrator | 2026-03-02 00:46:53 | INFO  | Task aa69484b-05e9-495d-9804-685b26e0bef1 is in state STARTED
2026-03-02 00:46:53.155181 | orchestrator | 2026-03-02 00:46:53 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED
2026-03-02 00:46:53.155235 | orchestrator | 2026-03-02 00:46:53 | INFO  | Task 687f7e6f-ee7d-435f-821d-4c7c2b3ccd77 is in state STARTED
2026-03-02 00:46:53.155244 | orchestrator | 2026-03-02 00:46:53 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED
2026-03-02 00:46:53.155251 | orchestrator | 2026-03-02 00:46:53 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED
2026-03-02 00:46:53.155260 | orchestrator | 2026-03-02 00:46:53 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED
2026-03-02 00:46:53.155268 | orchestrator | 2026-03-02 00:46:53 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:46:56.092679 | orchestrator | 2026-03-02 00:46:56 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:46:56.105822 | orchestrator | 2026-03-02 00:46:56 | INFO  | Task aa69484b-05e9-495d-9804-685b26e0bef1 is in state STARTED
2026-03-02 00:46:56.115750 | orchestrator | 2026-03-02 00:46:56 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED
2026-03-02 00:46:56.115841 | orchestrator | 2026-03-02 00:46:56 | INFO  | Task 687f7e6f-ee7d-435f-821d-4c7c2b3ccd77 is in state SUCCESS
2026-03-02 00:46:56.116284 | orchestrator | 2026-03-02 00:46:56 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED
2026-03-02 00:46:56.117155 | orchestrator | 2026-03-02 00:46:56 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED
2026-03-02 00:46:56.118405 | orchestrator | 2026-03-02 00:46:56 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED
2026-03-02 00:46:56.118461 | orchestrator | 2026-03-02 00:46:56 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:46:59.151902 | orchestrator | 2026-03-02 00:46:59 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:46:59.151981 | orchestrator | 2026-03-02 00:46:59 | INFO  | Task aa69484b-05e9-495d-9804-685b26e0bef1 is in state STARTED
2026-03-02 00:46:59.163024 | orchestrator | 2026-03-02 00:46:59 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED
2026-03-02 00:46:59.163120 | orchestrator | 2026-03-02 00:46:59 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED
2026-03-02 00:46:59.163138 | orchestrator | 2026-03-02 00:46:59 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED
2026-03-02 00:46:59.163155 | orchestrator | 2026-03-02 00:46:59 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED
2026-03-02 00:46:59.163173 | orchestrator | 2026-03-02 00:46:59 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:47:02.199969 | orchestrator | 2026-03-02 00:47:02 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:47:02.200074 | orchestrator | 2026-03-02 00:47:02 | INFO  | Task aa69484b-05e9-495d-9804-685b26e0bef1 is in state STARTED
2026-03-02 00:47:02.201438 | orchestrator | 2026-03-02 00:47:02 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED
2026-03-02 00:47:02.201860 | orchestrator | 2026-03-02 00:47:02 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED
2026-03-02 00:47:02.202817 | orchestrator | 2026-03-02 00:47:02 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED
2026-03-02 00:47:02.204539 | orchestrator | 2026-03-02 00:47:02 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED
2026-03-02 00:47:02.204570 | orchestrator | 2026-03-02 00:47:02 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:47:05.237222 | orchestrator | 2026-03-02 00:47:05 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:47:05.237430 | orchestrator | 2026-03-02 00:47:05 | INFO  | Task aa69484b-05e9-495d-9804-685b26e0bef1 is in state SUCCESS
2026-03-02 00:47:05.238871 | orchestrator | 2026-03-02 00:47:05 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED
2026-03-02 00:47:05.239717 | orchestrator | 2026-03-02 00:47:05 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED
2026-03-02 00:47:05.242222 | orchestrator | 2026-03-02 00:47:05 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED
2026-03-02 00:47:05.242634 | orchestrator | 2026-03-02 00:47:05 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED
2026-03-02 00:47:05.242713 | orchestrator | 2026-03-02 00:47:05 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:47:08.313488 | orchestrator | 2026-03-02 00:47:08 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:47:08.313569 | orchestrator | 2026-03-02 00:47:08 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED
2026-03-02 00:47:08.313577 | orchestrator | 2026-03-02 00:47:08 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED
2026-03-02 00:47:08.313583 | orchestrator | 2026-03-02 00:47:08 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED
2026-03-02 00:47:08.313588 | orchestrator | 2026-03-02 00:47:08 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED
2026-03-02 00:47:08.313593 | orchestrator | 2026-03-02 00:47:08 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:47:11.383152 | orchestrator | 2026-03-02 00:47:11 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:47:11.387586 | orchestrator | 2026-03-02 00:47:11 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED
2026-03-02 00:47:11.391756 | orchestrator | 2026-03-02 00:47:11 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED
2026-03-02 00:47:11.396089 | orchestrator | 2026-03-02 00:47:11 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED
2026-03-02 00:47:11.398872 | orchestrator | 2026-03-02 00:47:11 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED
2026-03-02 00:47:11.402342 | orchestrator | 2026-03-02 00:47:11 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:47:14.453808 | orchestrator | 2026-03-02 00:47:14 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:47:14.456153 | orchestrator | 2026-03-02 00:47:14 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED
2026-03-02 00:47:14.456632 | orchestrator | 2026-03-02 00:47:14 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED
2026-03-02 00:47:14.462935 | orchestrator | 2026-03-02 00:47:14 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED
2026-03-02 00:47:14.463019 | orchestrator | 2026-03-02 00:47:14 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED
2026-03-02 00:47:14.463029 | orchestrator | 2026-03-02 00:47:14 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:47:17.511741 | orchestrator | 2026-03-02 00:47:17 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:47:17.512039 | orchestrator | 2026-03-02 00:47:17 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED
2026-03-02 00:47:17.514447 | orchestrator | 2026-03-02 00:47:17 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED
2026-03-02 00:47:17.517765 | orchestrator | 2026-03-02 00:47:17 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED
2026-03-02 00:47:17.520697 | orchestrator | 2026-03-02 00:47:17 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED
2026-03-02 00:47:17.521266 | orchestrator | 2026-03-02 00:47:17 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:47:20.559761 | orchestrator | 2026-03-02 00:47:20 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:47:20.559933 | orchestrator | 2026-03-02 00:47:20 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED
2026-03-02 00:47:20.560981 | orchestrator | 2026-03-02 00:47:20 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED
2026-03-02 00:47:20.562408 | orchestrator | 2026-03-02 00:47:20 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED
2026-03-02 00:47:20.562831 | orchestrator | 2026-03-02 00:47:20 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED
2026-03-02 00:47:20.562867 | orchestrator | 2026-03-02 00:47:20 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:47:23.596559 | orchestrator | 2026-03-02 00:47:23 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:47:23.598126 | orchestrator | 2026-03-02 00:47:23 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED
2026-03-02 00:47:23.599528 | orchestrator | 2026-03-02 00:47:23 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED
2026-03-02 00:47:23.601012 | orchestrator | 2026-03-02 00:47:23 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED
2026-03-02 00:47:23.603462 | orchestrator | 2026-03-02 00:47:23 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED
2026-03-02 00:47:23.603565 | orchestrator | 2026-03-02 00:47:23 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:47:26.647576 | orchestrator | 2026-03-02 00:47:26 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:47:26.648669 | orchestrator | 2026-03-02 00:47:26 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED
2026-03-02 00:47:26.650695 | orchestrator | 2026-03-02 00:47:26 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED
2026-03-02 00:47:26.651135 | orchestrator | 2026-03-02 00:47:26 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED
2026-03-02 00:47:26.652235 | orchestrator | 2026-03-02 00:47:26 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED
2026-03-02 00:47:26.652260 | orchestrator | 2026-03-02 00:47:26 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:47:29.685897 | orchestrator | 2026-03-02 00:47:29 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:47:29.685979 | orchestrator | 2026-03-02 00:47:29 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED
2026-03-02 00:47:29.686256 | orchestrator | 2026-03-02 00:47:29 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED
2026-03-02 00:47:29.688178 | orchestrator | 2026-03-02 00:47:29 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED
2026-03-02 00:47:29.688618 | orchestrator | 2026-03-02 00:47:29 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED
2026-03-02 00:47:29.688941 | orchestrator | 2026-03-02 00:47:29 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:47:32.723103 | orchestrator | 2026-03-02 00:47:32 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:47:32.724172 | orchestrator | 2026-03-02 00:47:32 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED
2026-03-02 00:47:32.724919 | orchestrator | 2026-03-02 00:47:32 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED
2026-03-02 00:47:32.725696 | orchestrator | 2026-03-02 00:47:32 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED
2026-03-02 00:47:32.726754 | orchestrator | 2026-03-02 00:47:32 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED
2026-03-02 00:47:32.726804 | orchestrator | 2026-03-02 00:47:32 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:47:35.768679 | orchestrator | 2026-03-02 00:47:35 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:47:35.771883 | orchestrator | 2026-03-02 00:47:35 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED
2026-03-02 00:47:35.772567 | orchestrator | 2026-03-02 00:47:35 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED
2026-03-02 00:47:35.774222 | orchestrator | 2026-03-02 00:47:35 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED
2026-03-02 00:47:35.775228 | orchestrator | 2026-03-02 00:47:35 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED
2026-03-02 00:47:35.776588 | orchestrator | 2026-03-02 00:47:35 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:47:38.846214 | orchestrator | 2026-03-02 00:47:38 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:47:38.849577 | orchestrator | 2026-03-02 00:47:38 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED
2026-03-02 00:47:38.851102 | orchestrator | 2026-03-02 00:47:38 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED
2026-03-02 00:47:38.851687 | orchestrator | 2026-03-02 00:47:38 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED
2026-03-02 00:47:38.853501 | orchestrator | 2026-03-02 00:47:38 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED
2026-03-02 00:47:38.853549 | orchestrator | 2026-03-02 00:47:38 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:47:41.888337 | orchestrator | 2026-03-02 00:47:41 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:47:41.890758 | orchestrator | 2026-03-02 00:47:41 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED
2026-03-02 00:47:41.891880 | orchestrator | 2026-03-02 00:47:41 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED
2026-03-02 00:47:41.893654 | orchestrator | 2026-03-02 00:47:41 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED
2026-03-02 00:47:41.895252 | orchestrator | 2026-03-02 00:47:41 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state STARTED
2026-03-02 00:47:41.895280 | orchestrator | 2026-03-02 00:47:41 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:47:44.932051 | orchestrator | 2026-03-02 00:47:44 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:47:44.934169 | orchestrator | 2026-03-02 00:47:44 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state STARTED
2026-03-02 00:47:44.934829 | orchestrator | 2026-03-02 00:47:44 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED
2026-03-02 00:47:44.935663 | orchestrator | 2026-03-02 00:47:44 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED
2026-03-02 00:47:44.935916 | orchestrator | 2026-03-02 00:47:44 | INFO  | Task 042b2b6e-fb97-4cbb-8b71-40b71bb904ad is in state SUCCESS
2026-03-02 00:47:44.936156 | orchestrator |
2026-03-02 00:47:44.936171 | orchestrator |
2026-03-02 00:47:44.936178 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-03-02 00:47:44.936185 | orchestrator |
2026-03-02 00:47:44.936192 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-03-02 00:47:44.936199 | orchestrator | Monday 02 March 2026 00:46:18 +0000 (0:00:00.731) 0:00:00.731 **********
2026-03-02 00:47:44.936205 | orchestrator | ok: [testbed-manager] => {
2026-03-02 00:47:44.936213 | orchestrator |     "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-03-02 00:47:44.936220 | orchestrator | }
2026-03-02 00:47:44.936226 | orchestrator |
2026-03-02 00:47:44.936232 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-02 00:47:44.936238 | orchestrator | Monday 02 March 2026 00:46:18 +0000 (0:00:00.440) 0:00:01.172 **********
2026-03-02 00:47:44.936245 | orchestrator | ok: [testbed-manager]
2026-03-02 00:47:44.936251 | orchestrator |
2026-03-02 00:47:44.936258 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-02 00:47:44.936264 | orchestrator | Monday 02 March 2026 00:46:19 +0000 (0:00:01.136) 0:00:02.308 **********
2026-03-02 00:47:44.936270 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-02 00:47:44.936276 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-02 00:47:44.936280 | orchestrator |
2026-03-02 00:47:44.936284 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-02 00:47:44.936288 | orchestrator | Monday 02 March 2026 00:46:20 +0000 (0:00:00.784) 0:00:03.093 **********
2026-03-02 00:47:44.936292 | orchestrator | changed: [testbed-manager]
2026-03-02 00:47:44.936295 | orchestrator |
2026-03-02 00:47:44.936299 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-02 00:47:44.936303 | orchestrator | Monday 02 March 2026 00:46:23 +0000 (0:00:02.943) 0:00:06.036 **********
2026-03-02 00:47:44.936306 | orchestrator | changed: [testbed-manager]
2026-03-02 00:47:44.936310 | orchestrator |
2026-03-02 00:47:44.936327 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-02 00:47:44.936331 | orchestrator | Monday 02 March 2026 00:46:24 +0000 (0:00:01.488) 0:00:07.525 **********
2026-03-02 00:47:44.936335 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-03-02 00:47:44.936339 | orchestrator | ok: [testbed-manager]
2026-03-02 00:47:44.936343 | orchestrator |
2026-03-02 00:47:44.936346 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-02 00:47:44.936350 | orchestrator | Monday 02 March 2026 00:46:51 +0000 (0:00:26.943) 0:00:34.469 **********
2026-03-02 00:47:44.936354 | orchestrator | changed: [testbed-manager]
2026-03-02 00:47:44.936357 | orchestrator |
2026-03-02 00:47:44.936361 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:47:44.936365 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:47:44.936382 | orchestrator |
2026-03-02 00:47:44.936386 | orchestrator |
2026-03-02 00:47:44.936390 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 00:47:44.936394 | orchestrator | Monday 02 March 2026 00:46:55 +0000 (0:00:03.462) 0:00:37.932 **********
2026-03-02 00:47:44.936398 | orchestrator | ===============================================================================
2026-03-02 00:47:44.936401 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.94s
2026-03-02 00:47:44.936428 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.46s
2026-03-02 00:47:44.936432 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.94s
2026-03-02 00:47:44.936436 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.49s
2026-03-02 00:47:44.936439 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.14s
2026-03-02 00:47:44.936443 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.79s
2026-03-02 00:47:44.936449 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.44s
2026-03-02 00:47:44.936453 | orchestrator |
2026-03-02 00:47:44.936456 | orchestrator |
2026-03-02 00:47:44.936460 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-02 00:47:44.936464 | orchestrator |
2026-03-02 00:47:44.936468 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-02 00:47:44.936471 | orchestrator | Monday 02 March 2026 00:46:16 +0000 (0:00:00.951) 0:00:00.951 **********
2026-03-02 00:47:44.936475 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-02 00:47:44.936480 | orchestrator |
2026-03-02 00:47:44.936483 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-02 00:47:44.936487 | orchestrator | Monday 02 March 2026 00:46:17 +0000 (0:00:00.802) 0:00:01.753 **********
2026-03-02 00:47:44.936491 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-02 00:47:44.936495 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-02 00:47:44.936499 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-02 00:47:44.936503 | orchestrator |
2026-03-02 00:47:44.936506 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-02 00:47:44.936510 | orchestrator | Monday 02 March 2026 00:46:19 +0000 (0:00:01.841) 0:00:03.595 **********
2026-03-02 00:47:44.936514 | orchestrator | changed: [testbed-manager]
2026-03-02 00:47:44.936517 | orchestrator |
2026-03-02 00:47:44.936521 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-02 00:47:44.936525 | orchestrator | Monday 02 March 2026 00:46:21 +0000 (0:00:01.484) 0:00:05.080 **********
2026-03-02 00:47:44.936535 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-02 00:47:44.936547 | orchestrator | ok: [testbed-manager]
2026-03-02 00:47:44.936553 | orchestrator |
2026-03-02 00:47:44.936560 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-02 00:47:44.936566 | orchestrator | Monday 02 March 2026 00:46:57 +0000 (0:00:36.592) 0:00:41.672 **********
2026-03-02 00:47:44.936570 | orchestrator | changed: [testbed-manager]
2026-03-02 00:47:44.936574 | orchestrator |
2026-03-02 00:47:44.936580 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-02 00:47:44.936587 | orchestrator | Monday 02 March 2026 00:46:59 +0000 (0:00:01.705) 0:00:43.378 **********
2026-03-02 00:47:44.936592 | orchestrator | ok: [testbed-manager]
2026-03-02 00:47:44.936599 | orchestrator |
2026-03-02 00:47:44.936605 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-02 00:47:44.936611 | orchestrator | Monday 02 March 2026 00:47:00 +0000 (0:00:00.986) 0:00:44.365 **********
2026-03-02 00:47:44.936617 | orchestrator | changed: [testbed-manager]
2026-03-02 00:47:44.936623 | orchestrator |
2026-03-02 00:47:44.936629 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-02 00:47:44.936635 | orchestrator | Monday 02 March 2026 00:47:02 +0000 (0:00:01.862) 0:00:46.227 **********
2026-03-02 00:47:44.936641 | orchestrator | changed: [testbed-manager]
2026-03-02 00:47:44.936647 | orchestrator |
2026-03-02 00:47:44.936653 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-02 00:47:44.936659 | orchestrator | Monday 02 March 2026 00:47:02 +0000 (0:00:00.657) 0:00:46.885 **********
2026-03-02 00:47:44.936666 | orchestrator | changed: [testbed-manager]
2026-03-02 00:47:44.936672 | orchestrator |
2026-03-02 00:47:44.936678 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-02 00:47:44.936685 | orchestrator | Monday 02 March 2026 00:47:03 +0000 (0:00:00.538) 0:00:47.423 **********
2026-03-02 00:47:44.936691 | orchestrator | ok: [testbed-manager]
2026-03-02 00:47:44.936698 | orchestrator |
2026-03-02 00:47:44.936705 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:47:44.936711 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:47:44.936718 | orchestrator |
2026-03-02 00:47:44.936725 | orchestrator |
2026-03-02 00:47:44.936731 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 00:47:44.936738 | orchestrator | Monday 02 March 2026 00:47:03 +0000 (0:00:00.353) 0:00:47.777 **********
2026-03-02 00:47:44.936744 | orchestrator | ===============================================================================
2026-03-02 00:47:44.936750 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.59s
2026-03-02 00:47:44.936757 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.86s
2026-03-02 00:47:44.936762 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.84s
2026-03-02 00:47:44.936766 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.71s
2026-03-02 00:47:44.936769 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.48s
2026-03-02 00:47:44.936773 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.99s
2026-03-02 00:47:44.936777 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.80s
2026-03-02 00:47:44.936781 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.66s
2026-03-02 00:47:44.936785 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.54s
2026-03-02 00:47:44.936790 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.35s
2026-03-02 00:47:44.936794 | orchestrator |
2026-03-02 00:47:44.936798 | orchestrator |
2026-03-02 00:47:44.936803 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-03-02 00:47:44.936807 | orchestrator |
2026-03-02 00:47:44.936811 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-03-02 00:47:44.936819 | orchestrator | Monday 02 March 2026 00:46:34 +0000 (0:00:00.241) 0:00:00.241 **********
2026-03-02 00:47:44.936828 | orchestrator | ok: [testbed-manager]
2026-03-02 00:47:44.936833 | orchestrator |
2026-03-02 00:47:44.936837 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-03-02 00:47:44.936841 | orchestrator | Monday 02 March 2026 00:46:36 +0000 (0:00:01.935) 0:00:02.177 **********
2026-03-02 00:47:44.936846 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-03-02 00:47:44.936850 | orchestrator |
2026-03-02 00:47:44.936854 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-03-02 00:47:44.936858 | orchestrator | Monday 02 March 2026 00:46:37 +0000 (0:00:00.937) 0:00:03.114 **********
2026-03-02 00:47:44.936863 | orchestrator | changed: [testbed-manager]
2026-03-02 00:47:44.936867 | orchestrator |
2026-03-02 00:47:44.936871 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-03-02 00:47:44.936877 | orchestrator | Monday 02 March 2026 00:46:38 +0000 (0:00:01.365) 0:00:04.479 **********
2026-03-02 00:47:44.936884 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-03-02 00:47:44.936890 | orchestrator | ok: [testbed-manager]
2026-03-02 00:47:44.936896 | orchestrator |
2026-03-02 00:47:44.936903 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-03-02 00:47:44.936909 | orchestrator | Monday 02 March 2026 00:47:37 +0000 (0:00:58.121) 0:01:02.600 **********
2026-03-02 00:47:44.936916 | orchestrator | changed: [testbed-manager]
2026-03-02 00:47:44.936922 | orchestrator |
2026-03-02 00:47:44.936929 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:47:44.936936 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:47:44.936942 | orchestrator |
2026-03-02 00:47:44.936949 | orchestrator |
2026-03-02 00:47:44.936955 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 00:47:44.936968 | orchestrator | Monday 02 March 2026 00:47:43 +0000 (0:00:06.901) 0:01:09.502 **********
2026-03-02 00:47:44.936975 | orchestrator | ===============================================================================
2026-03-02 00:47:44.936981 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 58.12s
2026-03-02 00:47:44.936988 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 6.90s
2026-03-02 00:47:44.936994 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.94s
2026-03-02 00:47:44.937002 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.37s
2026-03-02 00:47:44.937011 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.94s
2026-03-02 00:47:44.937017 | orchestrator | 2026-03-02 00:47:44 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:47:47.983268 | orchestrator | 2026-03-02 00:47:47 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:47:47.987289 | orchestrator | 2026-03-02 00:47:47 | INFO  | Task 7f4c2bcb-e5d8-4188-9a90-abb401358745 is in state SUCCESS
2026-03-02 00:47:47.987743 | orchestrator |
2026-03-02 00:47:47.987766 | orchestrator |
2026-03-02 00:47:47.987773 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-02 00:47:47.987779 | orchestrator |
2026-03-02 00:47:47.987786 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-02 00:47:47.987791 | orchestrator | Monday 02 March 2026 00:46:16 +0000 (0:00:00.727) 0:00:00.727 **********
2026-03-02 00:47:47.987796 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-02 00:47:47.987800 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-02 00:47:47.987804 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-02 00:47:47.987808 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-02 00:47:47.987814 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-02 00:47:47.987831 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-02 00:47:47.987836 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-02 00:47:47.987842 | orchestrator |
2026-03-02 00:47:47.987848 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-02 00:47:47.987854 | orchestrator |
2026-03-02 00:47:47.987860 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-02 00:47:47.987866 | orchestrator | Monday 02 March 2026 00:46:19 +0000 (0:00:02.255) 0:00:02.982 **********
2026-03-02 00:47:47.987880 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-02 00:47:47.987890 | orchestrator |
2026-03-02 00:47:47.987895 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-02 00:47:47.987899 | orchestrator | Monday 02 March 2026 00:46:20 +0000 (0:00:01.388) 0:00:04.370 **********
2026-03-02 00:47:47.987903 | orchestrator | ok: [testbed-manager]
2026-03-02 00:47:47.987909 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:47:47.987915 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:47:47.987921 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:47:47.987928 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:47:47.987933 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:47:47.987939 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:47:47.987945 | orchestrator |
2026-03-02 00:47:47.987949 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-02 00:47:47.987953 | orchestrator | Monday 02 March 2026 00:46:22 +0000 (0:00:02.293) 0:00:06.664 **********
2026-03-02 00:47:47.987957 | orchestrator | ok: [testbed-manager]
2026-03-02 00:47:47.987960 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:47:47.987964 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:47:47.987968 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:47:47.987975 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:47:47.987979 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:47:47.987982 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:47:47.987986 | orchestrator |
2026-03-02 00:47:47.987992 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-02 00:47:47.987998 | orchestrator | Monday 02 March 2026 00:46:26 +0000 (0:00:03.480) 0:00:10.144 **********
2026-03-02 00:47:47.988005 | orchestrator | changed: [testbed-manager]
2026-03-02 00:47:47.988010 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:47:47.988016 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:47:47.988022 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:47:47.988027 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:47:47.988032 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:47:47.988037 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:47:47.988043 | orchestrator |
2026-03-02 00:47:47.988048 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-02 00:47:47.988053 | orchestrator | Monday 02 March 2026 00:46:28 +0000 (0:00:02.210) 0:00:12.354 **********
2026-03-02 00:47:47.988059 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:47:47.988064 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:47:47.988070 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:47:47.988076 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:47:47.988083 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:47:47.988087 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:47:47.988090 | orchestrator | changed: [testbed-manager]
2026-03-02 00:47:47.988094 | orchestrator |
2026-03-02 00:47:47.988098 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-02 00:47:47.988102 | orchestrator | Monday 02 March 2026 00:46:41 +0000 (0:00:13.100) 0:00:25.454 **********
2026-03-02 00:47:47.988105 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:47:47.988109 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:47:47.988113 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:47:47.988121 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:47:47.988124 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:47:47.988128 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:47:47.988132 | orchestrator | changed: [testbed-manager]
2026-03-02 00:47:47.988136 | orchestrator |
2026-03-02 00:47:47.988139 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-02 00:47:47.988143 | orchestrator | Monday 02 March 2026 00:47:20 +0000 (0:00:39.276) 0:01:04.731 **********
2026-03-02 00:47:47.988147 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-02 00:47:47.988152 | orchestrator |
2026-03-02 00:47:47.988156 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-02 00:47:47.988160 | orchestrator | Monday 02 March 2026 00:47:22 +0000 (0:00:01.322) 0:01:06.053 **********
2026-03-02 00:47:47.988164 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-02 00:47:47.988168 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-02 00:47:47.988172 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-02 00:47:47.988175 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-02 00:47:47.988186 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-02 00:47:47.988190 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-02 00:47:47.988193 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-02 00:47:47.988197 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-02 00:47:47.988201 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-02 00:47:47.988205 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-02 00:47:47.988208 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-02 00:47:47.988212 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-02 00:47:47.988216 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-02 00:47:47.988219 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-02 00:47:47.988223 | orchestrator |
2026-03-02 00:47:47.988227 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-02 00:47:47.988231 | orchestrator | Monday 02 March 2026 00:47:25 +0000 (0:00:03.512) 0:01:09.565 **********
2026-03-02 00:47:47.988235 | orchestrator | ok: [testbed-manager]
2026-03-02 00:47:47.988239 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:47:47.988243 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:47:47.988246 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:47:47.988250 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:47:47.988300 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:47:47.988306 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:47:47.988309 | orchestrator |
2026-03-02 00:47:47.988313 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-02 00:47:47.988317 | orchestrator | Monday 02 March 2026 00:47:26 +0000 (0:00:00.985) 0:01:10.550 **********
2026-03-02 00:47:47.988321 | orchestrator | changed: [testbed-manager]
2026-03-02 00:47:47.988325 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:47:47.988329 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:47:47.988335 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:47:47.988341 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:47:47.988347 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:47:47.988353 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:47:47.988360 | orchestrator |
2026-03-02 00:47:47.988364 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-02 00:47:47.988368 | orchestrator | Monday 02 March 2026 00:47:27 +0000
(0:00:01.181) 0:01:11.732 ********** 2026-03-02 00:47:47.988417 | orchestrator | ok: [testbed-manager] 2026-03-02 00:47:47.988423 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:47:47.988435 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:47:47.988439 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:47:47.988443 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:47:47.988447 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:47:47.988450 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:47:47.988454 | orchestrator | 2026-03-02 00:47:47.988458 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-03-02 00:47:47.988465 | orchestrator | Monday 02 March 2026 00:47:29 +0000 (0:00:01.214) 0:01:12.947 ********** 2026-03-02 00:47:47.988469 | orchestrator | ok: [testbed-manager] 2026-03-02 00:47:47.988472 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:47:47.988476 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:47:47.988480 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:47:47.988483 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:47:47.988487 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:47:47.988491 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:47:47.988494 | orchestrator | 2026-03-02 00:47:47.988498 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-03-02 00:47:47.988502 | orchestrator | Monday 02 March 2026 00:47:31 +0000 (0:00:01.938) 0:01:14.885 ********** 2026-03-02 00:47:47.988506 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-03-02 00:47:47.988511 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:47:47.988515 | orchestrator | 2026-03-02 
00:47:47.988519 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-03-02 00:47:47.988523 | orchestrator | Monday 02 March 2026 00:47:32 +0000 (0:00:01.162) 0:01:16.048 ********** 2026-03-02 00:47:47.988526 | orchestrator | changed: [testbed-manager] 2026-03-02 00:47:47.988530 | orchestrator | 2026-03-02 00:47:47.988534 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-03-02 00:47:47.988537 | orchestrator | Monday 02 March 2026 00:47:33 +0000 (0:00:01.706) 0:01:17.755 ********** 2026-03-02 00:47:47.988541 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:47:47.988545 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:47:47.988548 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:47:47.988552 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:47:47.988556 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:47:47.988559 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:47:47.988563 | orchestrator | changed: [testbed-manager] 2026-03-02 00:47:47.988567 | orchestrator | 2026-03-02 00:47:47.988570 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:47:47.988574 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:47:47.988579 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:47:47.988582 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:47:47.988586 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:47:47.988594 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:47:47.988598 | orchestrator | testbed-node-4 : ok=15  changed=7  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:47:47.988602 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:47:47.988605 | orchestrator | 2026-03-02 00:47:47.988612 | orchestrator | 2026-03-02 00:47:47.988616 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 00:47:47.988619 | orchestrator | Monday 02 March 2026 00:47:45 +0000 (0:00:11.212) 0:01:28.967 ********** 2026-03-02 00:47:47.988623 | orchestrator | =============================================================================== 2026-03-02 00:47:47.988627 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 39.28s 2026-03-02 00:47:47.988630 | orchestrator | osism.services.netdata : Add repository -------------------------------- 13.10s 2026-03-02 00:47:47.988634 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.21s 2026-03-02 00:47:47.988638 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.51s 2026-03-02 00:47:47.988641 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.48s 2026-03-02 00:47:47.988645 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.29s 2026-03-02 00:47:47.988649 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.26s 2026-03-02 00:47:47.988652 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.21s 2026-03-02 00:47:47.988656 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.94s 2026-03-02 00:47:47.988660 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.71s 2026-03-02 00:47:47.988663 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 
1.39s 2026-03-02 00:47:47.988667 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.32s 2026-03-02 00:47:47.988671 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.21s 2026-03-02 00:47:47.988674 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.18s 2026-03-02 00:47:47.988678 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.16s 2026-03-02 00:47:47.988682 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 0.99s 2026-03-02 00:47:47.988688 | orchestrator | 2026-03-02 00:47:47 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:47:47.988692 | orchestrator | 2026-03-02 00:47:47 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state STARTED 2026-03-02 00:47:47.988696 | orchestrator | 2026-03-02 00:47:47 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:48:18.416967 | orchestrator | 2026-03-02 00:48:18 | INFO  | Task 23dacfef-1407-4c4f-9a7c-7e2640ec7b7d is in state SUCCESS 2026-03-02 00:48:18.418695 | orchestrator | 2026-03-02 00:48:18.418764 | orchestrator | 2026-03-02 00:48:18.418791 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-02 00:48:18.418800 | orchestrator | 2026-03-02 00:48:18.418806 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-02 00:48:18.418813 | orchestrator | Monday 02 March 2026 00:46:09 +0000 (0:00:00.221) 0:00:00.221 ********** 2026-03-02 00:48:18.418821 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:48:18.418829 | orchestrator | 2026-03-02 00:48:18.418835 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-02 00:48:18.418841 | orchestrator | Monday 02 March 2026 00:46:10 +0000 (0:00:01.109) 0:00:01.331 ********** 2026-03-02 00:48:18.418847 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-02 00:48:18.418852 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-02 00:48:18.418859 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-02 00:48:18.418865 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-02 00:48:18.418871 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-02 00:48:18.418877 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-02 00:48:18.418883 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-02 00:48:18.418889 | orchestrator |
changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-02 00:48:18.418894 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-02 00:48:18.418900 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-02 00:48:18.418906 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-02 00:48:18.418913 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-02 00:48:18.418920 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-02 00:48:18.418926 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-02 00:48:18.418933 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-02 00:48:18.418939 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-02 00:48:18.418945 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-02 00:48:18.418950 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-02 00:48:18.418956 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-02 00:48:18.418962 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-02 00:48:18.418968 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-02 00:48:18.418973 | orchestrator | 2026-03-02 00:48:18.418979 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-02 00:48:18.418984 | orchestrator | Monday 02 March 2026 00:46:14 +0000 (0:00:03.979) 
0:00:05.310 ********** 2026-03-02 00:48:18.418990 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:48:18.418998 | orchestrator | 2026-03-02 00:48:18.419003 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-02 00:48:18.419009 | orchestrator | Monday 02 March 2026 00:46:15 +0000 (0:00:01.183) 0:00:06.493 ********** 2026-03-02 00:48:18.419018 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.419032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.419054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.419064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.419068 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.419073 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.419077 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.419082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.419093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.419101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.419106 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.419110 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.419114 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.419119 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.419130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.419138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.419144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.419155 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.419159 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.419163 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.419167 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.419171 | orchestrator | 2026-03-02 00:48:18.419175 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-02 00:48:18.419179 | orchestrator | Monday 02 March 2026 00:46:20 +0000 (0:00:04.815) 0:00:11.308 ********** 2026-03-02 00:48:18.419183 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-02 00:48:18.419187 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.419194 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.419199 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:48:18.419209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-02 00:48:18.419225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.419235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.419242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-02 00:48:18.419248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-02 00:48:18.419254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.419267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.419276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.419283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.419289 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:48:18.419301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-02 00:48:18.419308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.419315 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:48:18.419322 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:48:18.419328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.419335 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:48:18.419341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-02 00:48:18.419353 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.419359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.419365 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:48:18.419375 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-02 00:48:18.419450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.419457 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.419464 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:48:18.419470 | orchestrator | 2026-03-02 00:48:18.419477 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-02 00:48:18.419483 | orchestrator | Monday 02 March 2026 00:46:22 +0000 (0:00:01.743) 0:00:13.052 ********** 2026-03-02 00:48:18.419489 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-02 00:48:18.419495 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.419510 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.419516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-02 00:48:18.419522 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:48:18.419532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.419539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.419551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-02 00:48:18.419558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.419564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-03-02 00:48:18.419575 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:48:18.419582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-02 00:48:18.419589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.419596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.419603 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:48:18.419610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-02 00:48:18.421313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.421359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.421371 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:48:18.421454 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:48:18.421467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-02 00:48:18.421490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.421509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.421518 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:48:18.421528 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-02 00:48:18.421538 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.421550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.421555 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:48:18.421561 | orchestrator | 2026-03-02 00:48:18.421567 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-02 00:48:18.421573 | orchestrator | Monday 02 March 2026 00:46:24 +0000 (0:00:02.781) 0:00:15.834 ********** 2026-03-02 00:48:18.421577 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:48:18.421582 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:48:18.421587 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:48:18.421592 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:48:18.421596 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:48:18.421610 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:48:18.421617 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:48:18.421625 | orchestrator | 2026-03-02 00:48:18.421633 | orchestrator | TASK [common : Restart 
systemd-tmpfiles] *************************************** 2026-03-02 00:48:18.421641 | orchestrator | Monday 02 March 2026 00:46:25 +0000 (0:00:00.573) 0:00:16.407 ********** 2026-03-02 00:48:18.421649 | orchestrator | skipping: [testbed-manager] 2026-03-02 00:48:18.421663 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:48:18.421672 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:48:18.421680 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:48:18.421688 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:48:18.421696 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:48:18.421704 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:48:18.421712 | orchestrator | 2026-03-02 00:48:18.421720 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-02 00:48:18.421728 | orchestrator | Monday 02 March 2026 00:46:26 +0000 (0:00:01.318) 0:00:17.726 ********** 2026-03-02 00:48:18.421737 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.421745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.421750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.421756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.421760 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.421768 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.421779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.421792 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.421801 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.421809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.421818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.421826 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.421838 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.421859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.421868 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.421877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.421886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.421894 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.421903 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.421912 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.421920 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.421929 | orchestrator | 2026-03-02 00:48:18.421938 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-02 00:48:18.421954 | orchestrator | Monday 02 March 2026 00:46:32 +0000 (0:00:05.650) 0:00:23.376 ********** 2026-03-02 00:48:18.421963 | orchestrator | [WARNING]: Skipped 2026-03-02 00:48:18.421972 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-02 00:48:18.421981 | orchestrator | to this access issue: 2026-03-02 00:48:18.421988 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-02 00:48:18.421996 | orchestrator | directory 2026-03-02 00:48:18.422005 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-02 00:48:18.422080 | orchestrator | 2026-03-02 00:48:18.422093 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-02 00:48:18.422102 | orchestrator | Monday 02 March 2026 00:46:34 +0000 (0:00:02.264) 0:00:25.641 ********** 2026-03-02 00:48:18.422110 | orchestrator | [WARNING]: Skipped 2026-03-02 00:48:18.422118 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-02 00:48:18.422132 | orchestrator | to this access issue: 2026-03-02 00:48:18.422140 | orchestrator 
| '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-02 00:48:18.422148 | orchestrator | directory 2026-03-02 00:48:18.422156 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-02 00:48:18.422164 | orchestrator | 2026-03-02 00:48:18.422172 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-02 00:48:18.422180 | orchestrator | Monday 02 March 2026 00:46:35 +0000 (0:00:01.005) 0:00:26.647 ********** 2026-03-02 00:48:18.422188 | orchestrator | [WARNING]: Skipped 2026-03-02 00:48:18.422196 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-02 00:48:18.422204 | orchestrator | to this access issue: 2026-03-02 00:48:18.422213 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-02 00:48:18.422221 | orchestrator | directory 2026-03-02 00:48:18.422228 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-02 00:48:18.422237 | orchestrator | 2026-03-02 00:48:18.422244 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-02 00:48:18.422249 | orchestrator | Monday 02 March 2026 00:46:36 +0000 (0:00:01.208) 0:00:27.855 ********** 2026-03-02 00:48:18.422254 | orchestrator | [WARNING]: Skipped 2026-03-02 00:48:18.422261 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-02 00:48:18.422269 | orchestrator | to this access issue: 2026-03-02 00:48:18.422277 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-02 00:48:18.422286 | orchestrator | directory 2026-03-02 00:48:18.422294 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-02 00:48:18.422302 | orchestrator | 2026-03-02 00:48:18.422310 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-02 00:48:18.422318 | 
orchestrator | Monday 02 March 2026 00:46:38 +0000 (0:00:01.482) 0:00:29.338 ********** 2026-03-02 00:48:18.422326 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:48:18.422351 | orchestrator | changed: [testbed-manager] 2026-03-02 00:48:18.422360 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:48:18.422368 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:48:18.422376 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:48:18.422405 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:48:18.422413 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:48:18.422421 | orchestrator | 2026-03-02 00:48:18.422429 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-02 00:48:18.422437 | orchestrator | Monday 02 March 2026 00:46:43 +0000 (0:00:04.879) 0:00:34.217 ********** 2026-03-02 00:48:18.422445 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-02 00:48:18.422452 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-02 00:48:18.422457 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-02 00:48:18.422468 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-02 00:48:18.422472 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-02 00:48:18.422477 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-02 00:48:18.422482 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-02 00:48:18.422487 | orchestrator | 2026-03-02 00:48:18.422491 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie 
exists] *************************** 2026-03-02 00:48:18.422496 | orchestrator | Monday 02 March 2026 00:46:47 +0000 (0:00:03.982) 0:00:38.200 ********** 2026-03-02 00:48:18.422501 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:48:18.422506 | orchestrator | changed: [testbed-manager] 2026-03-02 00:48:18.422510 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:48:18.422515 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:48:18.422520 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:48:18.422524 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:48:18.422529 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:48:18.422534 | orchestrator | 2026-03-02 00:48:18.422538 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-02 00:48:18.422543 | orchestrator | Monday 02 March 2026 00:46:49 +0000 (0:00:02.240) 0:00:40.441 ********** 2026-03-02 00:48:18.422553 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.422563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.422569 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.422574 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.422579 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.422588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.422593 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.422599 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.422607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.422617 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.422622 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.422628 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.422639 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.422644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.422650 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.422657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:48:18.422663 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.422675 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.422681 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.422686 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-02 00:48:18.422694 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.422699 | orchestrator | 2026-03-02 00:48:18.422704 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-02 00:48:18.422709 | orchestrator | Monday 02 March 2026 00:46:51 +0000 (0:00:02.194) 0:00:42.636 ********** 2026-03-02 00:48:18.422714 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-02 00:48:18.422719 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-02 00:48:18.422724 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-02 00:48:18.422728 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-02 00:48:18.422733 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-02 00:48:18.422738 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-02 00:48:18.422743 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-02 00:48:18.422747 | orchestrator | 2026-03-02 00:48:18.422752 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-02 00:48:18.422757 | orchestrator | Monday 02 March 2026 00:46:54 +0000 (0:00:02.976) 0:00:45.612 ********** 2026-03-02 00:48:18.422762 
| orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-02 00:48:18.422767 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-02 00:48:18.422772 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-02 00:48:18.422777 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-02 00:48:18.422781 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-02 00:48:18.422786 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-02 00:48:18.422791 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-02 00:48:18.422796 | orchestrator | 2026-03-02 00:48:18.422800 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-02 00:48:18.422805 | orchestrator | Monday 02 March 2026 00:46:57 +0000 (0:00:02.784) 0:00:48.396 ********** 2026-03-02 00:48:18.422810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.422820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.422828 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.422833 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.422838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.422843 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.422852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.422859 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.422873 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.422881 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.422887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.422892 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-02 00:48:18.422897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.422902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.422907 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.422915 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.422924 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.422933 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.422938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.422943 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.422948 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:48:18.422953 | orchestrator | 2026-03-02 00:48:18.422958 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-02 00:48:18.422963 | orchestrator | Monday 02 March 2026 00:47:01 +0000 (0:00:03.825) 0:00:52.222 ********** 2026-03-02 00:48:18.422967 | orchestrator | changed: [testbed-manager] 2026-03-02 00:48:18.422972 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:48:18.422977 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:48:18.422982 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:48:18.422986 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:48:18.422991 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:48:18.422996 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:48:18.423000 | orchestrator | 2026-03-02 00:48:18.423005 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-02 00:48:18.423010 | orchestrator | Monday 02 March 2026 00:47:03 +0000 (0:00:01.668) 0:00:53.891 ********** 2026-03-02 00:48:18.423015 | orchestrator | changed: [testbed-manager] 2026-03-02 00:48:18.423019 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:48:18.423024 | 
orchestrator | changed: [testbed-node-1] 2026-03-02 00:48:18.423029 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:48:18.423034 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:48:18.423039 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:48:18.423043 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:48:18.423048 | orchestrator | 2026-03-02 00:48:18.423053 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-02 00:48:18.423058 | orchestrator | Monday 02 March 2026 00:47:04 +0000 (0:00:01.424) 0:00:55.315 ********** 2026-03-02 00:48:18.423062 | orchestrator | 2026-03-02 00:48:18.423071 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-02 00:48:18.423076 | orchestrator | Monday 02 March 2026 00:47:04 +0000 (0:00:00.063) 0:00:55.378 ********** 2026-03-02 00:48:18.423080 | orchestrator | 2026-03-02 00:48:18.423085 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-02 00:48:18.423090 | orchestrator | Monday 02 March 2026 00:47:04 +0000 (0:00:00.061) 0:00:55.440 ********** 2026-03-02 00:48:18.423095 | orchestrator | 2026-03-02 00:48:18.423099 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-02 00:48:18.423107 | orchestrator | Monday 02 March 2026 00:47:04 +0000 (0:00:00.173) 0:00:55.613 ********** 2026-03-02 00:48:18.423112 | orchestrator | 2026-03-02 00:48:18.423116 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-02 00:48:18.423121 | orchestrator | Monday 02 March 2026 00:47:04 +0000 (0:00:00.067) 0:00:55.681 ********** 2026-03-02 00:48:18.423126 | orchestrator | 2026-03-02 00:48:18.423131 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-02 00:48:18.423135 | orchestrator | Monday 02 March 2026 00:47:04 +0000 
(0:00:00.060) 0:00:55.741 ********** 2026-03-02 00:48:18.423140 | orchestrator | 2026-03-02 00:48:18.423145 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-02 00:48:18.423149 | orchestrator | Monday 02 March 2026 00:47:04 +0000 (0:00:00.060) 0:00:55.802 ********** 2026-03-02 00:48:18.423154 | orchestrator | 2026-03-02 00:48:18.423159 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-02 00:48:18.423167 | orchestrator | Monday 02 March 2026 00:47:05 +0000 (0:00:00.083) 0:00:55.886 ********** 2026-03-02 00:48:18.423172 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:48:18.423176 | orchestrator | changed: [testbed-manager] 2026-03-02 00:48:18.423181 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:48:18.423186 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:48:18.423191 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:48:18.423195 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:48:18.423200 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:48:18.423205 | orchestrator | 2026-03-02 00:48:18.423210 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-02 00:48:18.423215 | orchestrator | Monday 02 March 2026 00:47:36 +0000 (0:00:31.262) 0:01:27.149 ********** 2026-03-02 00:48:18.423220 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:48:18.423225 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:48:18.423229 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:48:18.423234 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:48:18.423239 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:48:18.423243 | orchestrator | changed: [testbed-manager] 2026-03-02 00:48:18.423248 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:48:18.423253 | orchestrator | 2026-03-02 00:48:18.423258 | orchestrator | RUNNING HANDLER [common : 
Initializing toolbox container using normal user] **** 2026-03-02 00:48:18.423263 | orchestrator | Monday 02 March 2026 00:48:06 +0000 (0:00:30.617) 0:01:57.766 ********** 2026-03-02 00:48:18.423268 | orchestrator | ok: [testbed-manager] 2026-03-02 00:48:18.423272 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:48:18.423277 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:48:18.423282 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:48:18.423287 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:48:18.423292 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:48:18.423297 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:48:18.423301 | orchestrator | 2026-03-02 00:48:18.423306 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-02 00:48:18.423311 | orchestrator | Monday 02 March 2026 00:48:09 +0000 (0:00:02.173) 0:01:59.940 ********** 2026-03-02 00:48:18.423316 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:48:18.423321 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:48:18.423326 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:48:18.423330 | orchestrator | changed: [testbed-manager] 2026-03-02 00:48:18.423339 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:48:18.423344 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:48:18.423349 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:48:18.423353 | orchestrator | 2026-03-02 00:48:18.423358 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:48:18.423364 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-02 00:48:18.423369 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-02 00:48:18.423374 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-02 
00:48:18.423398 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-02 00:48:18.423407 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-02 00:48:18.423414 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-02 00:48:18.423418 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-02 00:48:18.423423 | orchestrator | 2026-03-02 00:48:18.423430 | orchestrator | 2026-03-02 00:48:18.423438 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 00:48:18.423446 | orchestrator | Monday 02 March 2026 00:48:17 +0000 (0:00:08.840) 0:02:08.780 ********** 2026-03-02 00:48:18.423453 | orchestrator | =============================================================================== 2026-03-02 00:48:18.423460 | orchestrator | common : Restart fluentd container ------------------------------------- 31.26s 2026-03-02 00:48:18.423468 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 30.62s 2026-03-02 00:48:18.423476 | orchestrator | common : Restart cron container ----------------------------------------- 8.84s 2026-03-02 00:48:18.423483 | orchestrator | common : Copying over config.json files for services -------------------- 5.65s 2026-03-02 00:48:18.423490 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.88s 2026-03-02 00:48:18.423503 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.82s 2026-03-02 00:48:18.423511 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.98s 2026-03-02 00:48:18.423518 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.98s 2026-03-02 00:48:18.423526 | 
orchestrator | common : Check common containers ---------------------------------------- 3.83s 2026-03-02 00:48:18.423533 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.98s 2026-03-02 00:48:18.423541 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.78s 2026-03-02 00:48:18.423548 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.78s 2026-03-02 00:48:18.423554 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.26s 2026-03-02 00:48:18.423561 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.24s 2026-03-02 00:48:18.423573 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.19s 2026-03-02 00:48:18.423581 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.17s 2026-03-02 00:48:18.423589 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.74s 2026-03-02 00:48:18.423597 | orchestrator | common : Creating log volume -------------------------------------------- 1.67s 2026-03-02 00:48:18.423604 | orchestrator | common : Find custom fluentd output config files ------------------------ 1.48s 2026-03-02 00:48:18.423617 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.42s 2026-03-02 00:48:18.423623 | orchestrator | 2026-03-02 00:48:18 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:48:21.460623 | orchestrator | 2026-03-02 00:48:21 | INFO  | Task eefac293-04fc-473a-a879-a87f976441d5 is in state STARTED 2026-03-02 00:48:21.461797 | orchestrator | 2026-03-02 00:48:21 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:48:21.461998 | orchestrator | 2026-03-02 00:48:21 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 
00:48:21.462652 | orchestrator | 2026-03-02 00:48:21 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:48:21.463481 | orchestrator | 2026-03-02 00:48:21 | INFO  | Task 33d38129-2548-4205-95f6-2614b0b02654 is in state STARTED 2026-03-02 00:48:21.464073 | orchestrator | 2026-03-02 00:48:21 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:48:21.464116 | orchestrator | 2026-03-02 00:48:21 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:48:24.491656 | orchestrator | 2026-03-02 00:48:24 | INFO  | Task eefac293-04fc-473a-a879-a87f976441d5 is in state STARTED 2026-03-02 00:48:24.491831 | orchestrator | 2026-03-02 00:48:24 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:48:24.492318 | orchestrator | 2026-03-02 00:48:24 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:48:24.492906 | orchestrator | 2026-03-02 00:48:24 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:48:24.493455 | orchestrator | 2026-03-02 00:48:24 | INFO  | Task 33d38129-2548-4205-95f6-2614b0b02654 is in state STARTED 2026-03-02 00:48:24.494143 | orchestrator | 2026-03-02 00:48:24 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:48:24.494286 | orchestrator | 2026-03-02 00:48:24 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:48:27.522983 | orchestrator | 2026-03-02 00:48:27 | INFO  | Task eefac293-04fc-473a-a879-a87f976441d5 is in state STARTED 2026-03-02 00:48:27.523511 | orchestrator | 2026-03-02 00:48:27 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:48:27.524004 | orchestrator | 2026-03-02 00:48:27 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:48:27.524659 | orchestrator | 2026-03-02 00:48:27 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 
00:48:27.525424 | orchestrator | 2026-03-02 00:48:27 | INFO  | Task 33d38129-2548-4205-95f6-2614b0b02654 is in state STARTED 2026-03-02 00:48:27.525968 | orchestrator | 2026-03-02 00:48:27 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:48:27.526074 | orchestrator | 2026-03-02 00:48:27 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:48:30.553968 | orchestrator | 2026-03-02 00:48:30 | INFO  | Task eefac293-04fc-473a-a879-a87f976441d5 is in state STARTED 2026-03-02 00:48:30.555031 | orchestrator | 2026-03-02 00:48:30 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:48:30.555894 | orchestrator | 2026-03-02 00:48:30 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:48:30.557054 | orchestrator | 2026-03-02 00:48:30 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:48:30.558319 | orchestrator | 2026-03-02 00:48:30 | INFO  | Task 33d38129-2548-4205-95f6-2614b0b02654 is in state STARTED 2026-03-02 00:48:30.559157 | orchestrator | 2026-03-02 00:48:30 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:48:30.559326 | orchestrator | 2026-03-02 00:48:30 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:48:33.583650 | orchestrator | 2026-03-02 00:48:33 | INFO  | Task eefac293-04fc-473a-a879-a87f976441d5 is in state STARTED 2026-03-02 00:48:33.583765 | orchestrator | 2026-03-02 00:48:33 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:48:33.584345 | orchestrator | 2026-03-02 00:48:33 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:48:33.586626 | orchestrator | 2026-03-02 00:48:33 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:48:33.586683 | orchestrator | 2026-03-02 00:48:33 | INFO  | Task 33d38129-2548-4205-95f6-2614b0b02654 is in state STARTED 2026-03-02 
00:48:33.586694 | orchestrator | 2026-03-02 00:48:33 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:48:33.586706 | orchestrator | 2026-03-02 00:48:33 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:48:36.612602 | orchestrator | 2026-03-02 00:48:36 | INFO  | Task eefac293-04fc-473a-a879-a87f976441d5 is in state STARTED 2026-03-02 00:48:36.612880 | orchestrator | 2026-03-02 00:48:36 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:48:36.613753 | orchestrator | 2026-03-02 00:48:36 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:48:36.614336 | orchestrator | 2026-03-02 00:48:36 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:48:36.615166 | orchestrator | 2026-03-02 00:48:36 | INFO  | Task 33d38129-2548-4205-95f6-2614b0b02654 is in state STARTED 2026-03-02 00:48:36.617232 | orchestrator | 2026-03-02 00:48:36 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:48:36.617277 | orchestrator | 2026-03-02 00:48:36 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:48:39.656187 | orchestrator | 2026-03-02 00:48:39 | INFO  | Task eefac293-04fc-473a-a879-a87f976441d5 is in state STARTED 2026-03-02 00:48:39.656327 | orchestrator | 2026-03-02 00:48:39 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:48:39.657107 | orchestrator | 2026-03-02 00:48:39 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:48:39.658843 | orchestrator | 2026-03-02 00:48:39 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:48:39.659267 | orchestrator | 2026-03-02 00:48:39 | INFO  | Task 33d38129-2548-4205-95f6-2614b0b02654 is in state SUCCESS 2026-03-02 00:48:39.659849 | orchestrator | 2026-03-02 00:48:39 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 
00:48:39.659866 | orchestrator | 2026-03-02 00:48:39 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:48:42.682830 | orchestrator | 2026-03-02 00:48:42 | INFO  | Task eefac293-04fc-473a-a879-a87f976441d5 is in state STARTED 2026-03-02 00:48:42.683224 | orchestrator | 2026-03-02 00:48:42 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:48:42.683958 | orchestrator | 2026-03-02 00:48:42 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED 2026-03-02 00:48:42.685348 | orchestrator | 2026-03-02 00:48:42 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:48:42.686177 | orchestrator | 2026-03-02 00:48:42 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:48:42.687713 | orchestrator | 2026-03-02 00:48:42 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:48:42.687784 | orchestrator | 2026-03-02 00:48:42 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:48:45.725070 | orchestrator | 2026-03-02 00:48:45 | INFO  | Task eefac293-04fc-473a-a879-a87f976441d5 is in state STARTED 2026-03-02 00:48:45.725444 | orchestrator | 2026-03-02 00:48:45 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:48:45.727528 | orchestrator | 2026-03-02 00:48:45 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED 2026-03-02 00:48:45.728316 | orchestrator | 2026-03-02 00:48:45 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:48:45.729156 | orchestrator | 2026-03-02 00:48:45 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:48:45.730008 | orchestrator | 2026-03-02 00:48:45 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:48:45.731339 | orchestrator | 2026-03-02 00:48:45 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:48:48.756874 | orchestrator 
| 2026-03-02 00:48:48 | INFO  | Task eefac293-04fc-473a-a879-a87f976441d5 is in state SUCCESS 2026-03-02 00:48:48.757466 | orchestrator | 2026-03-02 00:48:48.757491 | orchestrator | 2026-03-02 00:48:48.757498 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-02 00:48:48.757505 | orchestrator | 2026-03-02 00:48:48.757512 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-02 00:48:48.757518 | orchestrator | Monday 02 March 2026 00:48:24 +0000 (0:00:00.450) 0:00:00.450 ********** 2026-03-02 00:48:48.757525 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:48:48.757532 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:48:48.757539 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:48:48.757545 | orchestrator | 2026-03-02 00:48:48.757552 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-02 00:48:48.757559 | orchestrator | Monday 02 March 2026 00:48:25 +0000 (0:00:00.550) 0:00:01.000 ********** 2026-03-02 00:48:48.757566 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-02 00:48:48.757573 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-02 00:48:48.757580 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-02 00:48:48.757587 | orchestrator | 2026-03-02 00:48:48.757594 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-02 00:48:48.757601 | orchestrator | 2026-03-02 00:48:48.757607 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-02 00:48:48.757614 | orchestrator | Monday 02 March 2026 00:48:26 +0000 (0:00:00.845) 0:00:01.846 ********** 2026-03-02 00:48:48.757621 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:48:48.757628 | 
orchestrator | 2026-03-02 00:48:48.757634 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-02 00:48:48.757641 | orchestrator | Monday 02 March 2026 00:48:26 +0000 (0:00:00.793) 0:00:02.639 ********** 2026-03-02 00:48:48.757648 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-02 00:48:48.757655 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-02 00:48:48.757661 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-02 00:48:48.757668 | orchestrator | 2026-03-02 00:48:48.757674 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-02 00:48:48.757680 | orchestrator | Monday 02 March 2026 00:48:27 +0000 (0:00:00.923) 0:00:03.562 ********** 2026-03-02 00:48:48.757686 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-02 00:48:48.757692 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-02 00:48:48.757699 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-02 00:48:48.757721 | orchestrator | 2026-03-02 00:48:48.757728 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-03-02 00:48:48.757734 | orchestrator | Monday 02 March 2026 00:48:29 +0000 (0:00:01.788) 0:00:05.352 ********** 2026-03-02 00:48:48.757741 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:48:48.757747 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:48:48.757753 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:48:48.757760 | orchestrator | 2026-03-02 00:48:48.757766 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-02 00:48:48.757772 | orchestrator | Monday 02 March 2026 00:48:31 +0000 (0:00:02.032) 0:00:07.384 ********** 2026-03-02 00:48:48.757779 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:48:48.757785 | orchestrator | changed: 
[testbed-node-1] 2026-03-02 00:48:48.757791 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:48:48.757797 | orchestrator | 2026-03-02 00:48:48.757803 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:48:48.757810 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:48:48.757817 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:48:48.757823 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:48:48.757830 | orchestrator | 2026-03-02 00:48:48.757836 | orchestrator | 2026-03-02 00:48:48.757842 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 00:48:48.757846 | orchestrator | Monday 02 March 2026 00:48:38 +0000 (0:00:07.226) 0:00:14.611 ********** 2026-03-02 00:48:48.757852 | orchestrator | =============================================================================== 2026-03-02 00:48:48.757858 | orchestrator | memcached : Restart memcached container --------------------------------- 7.23s 2026-03-02 00:48:48.757864 | orchestrator | memcached : Check memcached container ----------------------------------- 2.03s 2026-03-02 00:48:48.757870 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.79s 2026-03-02 00:48:48.757875 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.92s 2026-03-02 00:48:48.757882 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.85s 2026-03-02 00:48:48.757888 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.79s 2026-03-02 00:48:48.757904 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.55s 2026-03-02 00:48:48.757911 | 
orchestrator | 2026-03-02 00:48:48.757917 | orchestrator | 2026-03-02 00:48:48.757923 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-02 00:48:48.757929 | orchestrator | 2026-03-02 00:48:48.757935 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-02 00:48:48.757941 | orchestrator | Monday 02 March 2026 00:48:24 +0000 (0:00:00.343) 0:00:00.343 ********** 2026-03-02 00:48:48.757947 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:48:48.757954 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:48:48.757960 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:48:48.757966 | orchestrator | 2026-03-02 00:48:48.757973 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-02 00:48:48.757988 | orchestrator | Monday 02 March 2026 00:48:25 +0000 (0:00:00.642) 0:00:00.985 ********** 2026-03-02 00:48:48.757995 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-02 00:48:48.758001 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-02 00:48:48.758007 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-02 00:48:48.758044 | orchestrator | 2026-03-02 00:48:48.758062 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-02 00:48:48.758069 | orchestrator | 2026-03-02 00:48:48.758075 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-02 00:48:48.758100 | orchestrator | Monday 02 March 2026 00:48:26 +0000 (0:00:00.806) 0:00:01.791 ********** 2026-03-02 00:48:48.758107 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:48:48.758114 | orchestrator | 2026-03-02 00:48:48.758120 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 
2026-03-02 00:48:48.758127 | orchestrator | Monday 02 March 2026 00:48:27 +0000 (0:00:00.874) 0:00:02.666 ********** 2026-03-02 00:48:48.758136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 
'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758197 | orchestrator | 2026-03-02 00:48:48.758204 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-02 00:48:48.758211 | orchestrator | Monday 02 March 2026 00:48:28 +0000 (0:00:01.282) 0:00:03.948 ********** 2026-03-02 00:48:48.758218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758272 | orchestrator | 2026-03-02 00:48:48.758279 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-02 00:48:48.758286 | orchestrator | Monday 02 March 2026 00:48:30 +0000 (0:00:02.590) 0:00:06.538 ********** 2026-03-02 00:48:48.758293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758343 | orchestrator | 2026-03-02 00:48:48.758353 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-02 00:48:48.758360 | orchestrator | Monday 02 March 2026 00:48:33 +0000 (0:00:02.274) 0:00:08.813 ********** 2026-03-02 00:48:48.758367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 
2026-03-02 00:48:48.758380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-02 00:48:48.758428 | orchestrator | 2026-03-02 00:48:48.758435 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-02 00:48:48.758441 | orchestrator | Monday 02 March 2026 00:48:35 +0000 (0:00:01.952) 0:00:10.765 ********** 2026-03-02 00:48:48.758448 | orchestrator | 2026-03-02 00:48:48.758455 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-02 00:48:48.758465 | orchestrator | Monday 02 March 2026 00:48:35 +0000 (0:00:00.064) 0:00:10.830 ********** 2026-03-02 00:48:48.758472 | orchestrator | 2026-03-02 00:48:48.758478 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-02 00:48:48.758484 | orchestrator | Monday 02 March 2026 00:48:35 +0000 (0:00:00.086) 0:00:10.916 ********** 2026-03-02 00:48:48.758491 | orchestrator | 2026-03-02 00:48:48.758497 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-02 00:48:48.758503 | orchestrator | Monday 02 March 2026 00:48:35 +0000 (0:00:00.111) 0:00:11.027 ********** 2026-03-02 00:48:48.758508 | orchestrator | changed: [testbed-node-0] 2026-03-02 
00:48:48.758515 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:48:48.758522 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:48:48.758528 | orchestrator | 2026-03-02 00:48:48.758534 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-02 00:48:48.758540 | orchestrator | Monday 02 March 2026 00:48:39 +0000 (0:00:03.670) 0:00:14.698 ********** 2026-03-02 00:48:48.758545 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:48:48.758550 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:48:48.758556 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:48:48.758562 | orchestrator | 2026-03-02 00:48:48.758567 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:48:48.758573 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:48:48.758579 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:48:48.758586 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:48:48.758592 | orchestrator | 2026-03-02 00:48:48.758599 | orchestrator | 2026-03-02 00:48:48.758605 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 00:48:48.758612 | orchestrator | Monday 02 March 2026 00:48:47 +0000 (0:00:08.036) 0:00:22.735 ********** 2026-03-02 00:48:48.758618 | orchestrator | =============================================================================== 2026-03-02 00:48:48.758624 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.04s 2026-03-02 00:48:48.758630 | orchestrator | redis : Restart redis container ----------------------------------------- 3.67s 2026-03-02 00:48:48.758636 | orchestrator | redis : Copying over default config.json files 
-------------------------- 2.59s 2026-03-02 00:48:48.758642 | orchestrator | redis : Copying over redis config files --------------------------------- 2.27s 2026-03-02 00:48:48.758649 | orchestrator | redis : Check redis containers ------------------------------------------ 1.95s 2026-03-02 00:48:48.758655 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.28s 2026-03-02 00:48:48.758661 | orchestrator | redis : include_tasks --------------------------------------------------- 0.87s 2026-03-02 00:48:48.758667 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s 2026-03-02 00:48:48.758673 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.64s 2026-03-02 00:48:48.758685 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.26s 2026-03-02 00:48:48.758692 | orchestrator | 2026-03-02 00:48:48 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:48:48.759072 | orchestrator | 2026-03-02 00:48:48 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED 2026-03-02 00:48:48.759820 | orchestrator | 2026-03-02 00:48:48 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:48:48.760605 | orchestrator | 2026-03-02 00:48:48 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:48:48.761557 | orchestrator | 2026-03-02 00:48:48 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:48:48.763666 | orchestrator | 2026-03-02 00:48:48 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:48:51.790202 | orchestrator | 2026-03-02 00:48:51 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:48:51.790838 | orchestrator | 2026-03-02 00:48:51 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED 2026-03-02 00:48:51.791707 | 
orchestrator | 2026-03-02 00:48:51 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:48:51.792515 | orchestrator | 2026-03-02 00:48:51 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:48:51.793269 | orchestrator | 2026-03-02 00:48:51 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:48:51.793572 | orchestrator | 2026-03-02 00:48:51 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:48:54.871368 | orchestrator | 2026-03-02 00:48:54 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:48:54.871482 | orchestrator | 2026-03-02 00:48:54 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED 2026-03-02 00:48:54.873144 | orchestrator | 2026-03-02 00:48:54 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:48:54.873695 | orchestrator | 2026-03-02 00:48:54 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:48:54.874449 | orchestrator | 2026-03-02 00:48:54 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:48:54.874480 | orchestrator | 2026-03-02 00:48:54 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:48:57.920781 | orchestrator | 2026-03-02 00:48:57 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:48:57.920877 | orchestrator | 2026-03-02 00:48:57 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED 2026-03-02 00:48:57.920886 | orchestrator | 2026-03-02 00:48:57 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:48:57.920902 | orchestrator | 2026-03-02 00:48:57 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:48:57.920910 | orchestrator | 2026-03-02 00:48:57 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:48:57.920917 | 
orchestrator | 2026-03-02 00:48:57 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:49:00.955801 | orchestrator | 2026-03-02 00:49:00 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:49:00.955874 | orchestrator | 2026-03-02 00:49:00 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED 2026-03-02 00:49:00.960700 | orchestrator | 2026-03-02 00:49:00 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:49:00.960821 | orchestrator | 2026-03-02 00:49:00 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:49:00.960834 | orchestrator | 2026-03-02 00:49:00 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:49:00.960842 | orchestrator | 2026-03-02 00:49:00 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:49:04.014067 | orchestrator | 2026-03-02 00:49:04 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:49:04.033793 | orchestrator | 2026-03-02 00:49:04 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED 2026-03-02 00:49:04.033879 | orchestrator | 2026-03-02 00:49:04 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:49:04.033889 | orchestrator | 2026-03-02 00:49:04 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:49:04.033895 | orchestrator | 2026-03-02 00:49:04 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:49:04.033902 | orchestrator | 2026-03-02 00:49:04 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:49:07.123456 | orchestrator | 2026-03-02 00:49:07 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:49:07.123543 | orchestrator | 2026-03-02 00:49:07 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED 2026-03-02 00:49:07.123551 | orchestrator | 2026-03-02 
00:49:07 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:49:07.123558 | orchestrator | 2026-03-02 00:49:07 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:49:07.123564 | orchestrator | 2026-03-02 00:49:07 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:49:07.123570 | orchestrator | 2026-03-02 00:49:07 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:49:10.274832 | orchestrator | 2026-03-02 00:49:10 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:49:10.274903 | orchestrator | 2026-03-02 00:49:10 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED 2026-03-02 00:49:10.276000 | orchestrator | 2026-03-02 00:49:10 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:49:10.276586 | orchestrator | 2026-03-02 00:49:10 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:49:10.278092 | orchestrator | 2026-03-02 00:49:10 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:49:10.278134 | orchestrator | 2026-03-02 00:49:10 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:49:13.307490 | orchestrator | 2026-03-02 00:49:13 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:49:13.307596 | orchestrator | 2026-03-02 00:49:13 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED 2026-03-02 00:49:13.307955 | orchestrator | 2026-03-02 00:49:13 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:49:13.309175 | orchestrator | 2026-03-02 00:49:13 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:49:13.310112 | orchestrator | 2026-03-02 00:49:13 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:49:13.310162 | orchestrator | 2026-03-02 
00:49:13 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:49:16.360244 | orchestrator | 2026-03-02 00:49:16 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:49:16.360804 | orchestrator | 2026-03-02 00:49:16 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED 2026-03-02 00:49:16.361929 | orchestrator | 2026-03-02 00:49:16 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:49:16.364663 | orchestrator | 2026-03-02 00:49:16 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:49:16.364710 | orchestrator | 2026-03-02 00:49:16 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:49:16.364721 | orchestrator | 2026-03-02 00:49:16 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:49:19.516705 | orchestrator | 2026-03-02 00:49:19 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:49:19.520019 | orchestrator | 2026-03-02 00:49:19 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED 2026-03-02 00:49:19.521961 | orchestrator | 2026-03-02 00:49:19 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:49:19.523495 | orchestrator | 2026-03-02 00:49:19 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:49:19.525144 | orchestrator | 2026-03-02 00:49:19 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:49:19.525385 | orchestrator | 2026-03-02 00:49:19 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:49:22.555656 | orchestrator | 2026-03-02 00:49:22 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:49:22.555876 | orchestrator | 2026-03-02 00:49:22 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED 2026-03-02 00:49:22.558529 | orchestrator | 2026-03-02 00:49:22 | INFO  | Task 
68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:49:22.559014 | orchestrator | 2026-03-02 00:49:22 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:49:22.559768 | orchestrator | 2026-03-02 00:49:22 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:49:22.559832 | orchestrator | 2026-03-02 00:49:22 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:49:25.675598 | orchestrator | 2026-03-02 00:49:25 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:49:25.675699 | orchestrator | 2026-03-02 00:49:25 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED 2026-03-02 00:49:25.678290 | orchestrator | 2026-03-02 00:49:25 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:49:25.678737 | orchestrator | 2026-03-02 00:49:25 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:49:25.679375 | orchestrator | 2026-03-02 00:49:25 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:49:25.679524 | orchestrator | 2026-03-02 00:49:25 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:49:28.704658 | orchestrator | 2026-03-02 00:49:28 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:49:28.704764 | orchestrator | 2026-03-02 00:49:28 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED 2026-03-02 00:49:28.706233 | orchestrator | 2026-03-02 00:49:28 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:49:28.706845 | orchestrator | 2026-03-02 00:49:28 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:49:28.709229 | orchestrator | 2026-03-02 00:49:28 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state STARTED 2026-03-02 00:49:28.709316 | orchestrator | 2026-03-02 00:49:28 | INFO  | Wait 1 
second(s) until the next check 2026-03-02 00:49:31.748665 | orchestrator | 2026-03-02 00:49:31 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:49:31.750387 | orchestrator | 2026-03-02 00:49:31 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED 2026-03-02 00:49:31.752534 | orchestrator | 2026-03-02 00:49:31 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:49:31.754643 | orchestrator | 2026-03-02 00:49:31 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:49:31.756297 | orchestrator | 2026-03-02 00:49:31 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED 2026-03-02 00:49:31.758125 | orchestrator | 2026-03-02 00:49:31 | INFO  | Task 2d4626af-ad16-4504-b20c-91efe8600d89 is in state SUCCESS 2026-03-02 00:49:31.759490 | orchestrator | 2026-03-02 00:49:31.759541 | orchestrator | 2026-03-02 00:49:31.759550 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-02 00:49:31.759559 | orchestrator | 2026-03-02 00:49:31.759566 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-02 00:49:31.759573 | orchestrator | Monday 02 March 2026 00:48:24 +0000 (0:00:00.450) 0:00:00.450 ********** 2026-03-02 00:49:31.759580 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:49:31.759589 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:49:31.759595 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:49:31.759602 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:49:31.759609 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:49:31.759616 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:49:31.759623 | orchestrator | 2026-03-02 00:49:31.759629 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-02 00:49:31.759637 | orchestrator | Monday 02 March 2026 00:48:26 +0000 (0:00:01.417) 
0:00:01.868 ********** 2026-03-02 00:49:31.759642 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-02 00:49:31.759647 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-02 00:49:31.759652 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-02 00:49:31.759656 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-02 00:49:31.759661 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-02 00:49:31.759665 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-02 00:49:31.759669 | orchestrator | 2026-03-02 00:49:31.759673 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-02 00:49:31.759678 | orchestrator | 2026-03-02 00:49:31.759682 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-02 00:49:31.759686 | orchestrator | Monday 02 March 2026 00:48:27 +0000 (0:00:01.213) 0:00:03.082 ********** 2026-03-02 00:49:31.759692 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:49:31.759697 | orchestrator | 2026-03-02 00:49:31.759701 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-02 00:49:31.759705 | orchestrator | Monday 02 March 2026 00:48:28 +0000 (0:00:01.176) 0:00:04.258 ********** 2026-03-02 00:49:31.759709 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-02 00:49:31.759714 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-02 00:49:31.759718 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-02 00:49:31.759723 | 
orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-02 00:49:31.759727 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-02 00:49:31.759732 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-02 00:49:31.759739 | orchestrator | 2026-03-02 00:49:31.759770 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-02 00:49:31.759777 | orchestrator | Monday 02 March 2026 00:48:29 +0000 (0:00:01.116) 0:00:05.375 ********** 2026-03-02 00:49:31.759783 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-02 00:49:31.759789 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-02 00:49:31.759796 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-02 00:49:31.759802 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-02 00:49:31.759809 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-02 00:49:31.759815 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-02 00:49:31.759823 | orchestrator | 2026-03-02 00:49:31.759829 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-02 00:49:31.759836 | orchestrator | Monday 02 March 2026 00:48:31 +0000 (0:00:02.100) 0:00:07.475 ********** 2026-03-02 00:49:31.759844 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-02 00:49:31.759851 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:49:31.759859 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-02 00:49:31.759863 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:49:31.759868 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-02 00:49:31.759872 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:49:31.759887 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-02 00:49:31.759891 | 
orchestrator | skipping: [testbed-node-3] 2026-03-02 00:49:31.759895 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-02 00:49:31.759900 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:49:31.759906 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-02 00:49:31.759913 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:49:31.759920 | orchestrator | 2026-03-02 00:49:31.759926 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-02 00:49:31.759933 | orchestrator | Monday 02 March 2026 00:48:32 +0000 (0:00:01.179) 0:00:08.655 ********** 2026-03-02 00:49:31.759939 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:49:31.759946 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:49:31.759952 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:49:31.759959 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:49:31.759966 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:49:31.759973 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:49:31.759980 | orchestrator | 2026-03-02 00:49:31.759987 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-02 00:49:31.759993 | orchestrator | Monday 02 March 2026 00:48:33 +0000 (0:00:00.729) 0:00:09.384 ********** 2026-03-02 00:49:31.760016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760039 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760048 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760075 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760079 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760087 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760094 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760098 | orchestrator | 2026-03-02 00:49:31.760103 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-02 00:49:31.760107 | orchestrator | Monday 02 March 2026 00:48:35 +0000 (0:00:01.969) 0:00:11.354 ********** 2026-03-02 00:49:31.760111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760131 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760139 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760162 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760173 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760178 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760186 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760195 | orchestrator | 2026-03-02 00:49:31.760199 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-02 00:49:31.760203 | orchestrator | Monday 02 March 2026 00:48:40 +0000 (0:00:04.612) 0:00:15.967 ********** 2026-03-02 00:49:31.760208 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:49:31.760212 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:49:31.760216 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:49:31.760220 | orchestrator | skipping: [testbed-node-4] 2026-03-02 
00:49:31.760224 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:49:31.760228 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:49:31.760232 | orchestrator | 2026-03-02 00:49:31.760236 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-02 00:49:31.760240 | orchestrator | Monday 02 March 2026 00:48:41 +0000 (0:00:01.274) 0:00:17.242 ********** 2026-03-02 00:49:31.760244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760254 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760258 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760266 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760283 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760304 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760316 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760320 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-02 00:49:31.760325 | orchestrator | 2026-03-02 00:49:31.760329 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-02 00:49:31.760333 | orchestrator | Monday 02 March 2026 00:48:43 +0000 (0:00:02.083) 0:00:19.325 ********** 2026-03-02 00:49:31.760337 | orchestrator | 2026-03-02 00:49:31.760342 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-02 00:49:31.760346 | orchestrator | Monday 02 March 2026 00:48:43 +0000 (0:00:00.258) 0:00:19.583 ********** 2026-03-02 00:49:31.760350 | orchestrator | 2026-03-02 00:49:31.760354 | orchestrator | TASK [openvswitch : Flush 
Handlers] ******************************************** 2026-03-02 00:49:31.760358 | orchestrator | Monday 02 March 2026 00:48:44 +0000 (0:00:00.125) 0:00:19.709 ********** 2026-03-02 00:49:31.760362 | orchestrator | 2026-03-02 00:49:31.760366 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-02 00:49:31.760370 | orchestrator | Monday 02 March 2026 00:48:44 +0000 (0:00:00.130) 0:00:19.839 ********** 2026-03-02 00:49:31.760374 | orchestrator | 2026-03-02 00:49:31.760378 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-02 00:49:31.760382 | orchestrator | Monday 02 March 2026 00:48:44 +0000 (0:00:00.225) 0:00:20.065 ********** 2026-03-02 00:49:31.760386 | orchestrator | 2026-03-02 00:49:31.760410 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-02 00:49:31.760415 | orchestrator | Monday 02 March 2026 00:48:44 +0000 (0:00:00.291) 0:00:20.357 ********** 2026-03-02 00:49:31.760419 | orchestrator | 2026-03-02 00:49:31.760423 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-02 00:49:31.760427 | orchestrator | Monday 02 March 2026 00:48:44 +0000 (0:00:00.147) 0:00:20.504 ********** 2026-03-02 00:49:31.760432 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:49:31.760437 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:49:31.760444 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:49:31.760451 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:49:31.760457 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:49:31.760464 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:49:31.760470 | orchestrator | 2026-03-02 00:49:31.760476 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-02 00:49:31.760483 | orchestrator | Monday 02 March 2026 00:48:54 +0000 
(0:00:09.578) 0:00:30.083 ********** 2026-03-02 00:49:31.760489 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:49:31.760502 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:49:31.760509 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:49:31.760516 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:49:31.760522 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:49:31.760529 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:49:31.760535 | orchestrator | 2026-03-02 00:49:31.760539 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-02 00:49:31.760547 | orchestrator | Monday 02 March 2026 00:48:55 +0000 (0:00:01.358) 0:00:31.441 ********** 2026-03-02 00:49:31.760551 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:49:31.760555 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:49:31.760559 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:49:31.760566 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:49:31.760572 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:49:31.760579 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:49:31.760585 | orchestrator | 2026-03-02 00:49:31.760592 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-02 00:49:31.760599 | orchestrator | Monday 02 March 2026 00:49:06 +0000 (0:00:11.153) 0:00:42.595 ********** 2026-03-02 00:49:31.760605 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-02 00:49:31.760611 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-02 00:49:31.760617 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-02 00:49:31.760624 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 
'value': 'testbed-node-2'}) 2026-03-02 00:49:31.760631 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-02 00:49:31.760641 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-02 00:49:31.760647 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-02 00:49:31.760654 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-02 00:49:31.760661 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-02 00:49:31.760669 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-02 00:49:31.760676 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-02 00:49:31.760683 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-02 00:49:31.760690 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-02 00:49:31.760697 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-02 00:49:31.760704 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-02 00:49:31.760711 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-02 00:49:31.760718 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 
'state': 'absent'}) 2026-03-02 00:49:31.760724 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-02 00:49:31.760731 | orchestrator | 2026-03-02 00:49:31.760738 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-03-02 00:49:31.760744 | orchestrator | Monday 02 March 2026 00:49:15 +0000 (0:00:08.738) 0:00:51.333 ********** 2026-03-02 00:49:31.760757 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-02 00:49:31.760764 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:49:31.760771 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-02 00:49:31.760779 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:49:31.760786 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-02 00:49:31.760793 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:49:31.760799 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-02 00:49:31.760806 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-02 00:49:31.760814 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-02 00:49:31.760820 | orchestrator | 2026-03-02 00:49:31.760827 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-02 00:49:31.760835 | orchestrator | Monday 02 March 2026 00:49:18 +0000 (0:00:02.482) 0:00:53.816 ********** 2026-03-02 00:49:31.760841 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-02 00:49:31.760848 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:49:31.760855 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-02 00:49:31.760861 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:49:31.760865 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-02 00:49:31.760869 | orchestrator | skipping: 
[testbed-node-5] 2026-03-02 00:49:31.760874 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-02 00:49:31.760878 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-02 00:49:31.760882 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-02 00:49:31.760886 | orchestrator | 2026-03-02 00:49:31.760890 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-02 00:49:31.760894 | orchestrator | Monday 02 March 2026 00:49:21 +0000 (0:00:02.914) 0:00:56.731 ********** 2026-03-02 00:49:31.760898 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:49:31.760902 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:49:31.760911 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:49:31.760915 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:49:31.760919 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:49:31.760923 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:49:31.760927 | orchestrator | 2026-03-02 00:49:31.760931 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:49:31.760935 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-02 00:49:31.760940 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-02 00:49:31.760944 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-02 00:49:31.760949 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-02 00:49:31.760953 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-02 00:49:31.760961 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 
2026-03-02 00:49:31.760965 | orchestrator | 2026-03-02 00:49:31.760969 | orchestrator | 2026-03-02 00:49:31.760973 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 00:49:31.760977 | orchestrator | Monday 02 March 2026 00:49:29 +0000 (0:00:08.746) 0:01:05.477 ********** 2026-03-02 00:49:31.760981 | orchestrator | =============================================================================== 2026-03-02 00:49:31.760990 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.90s 2026-03-02 00:49:31.760994 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.58s 2026-03-02 00:49:31.760998 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.74s 2026-03-02 00:49:31.761002 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.61s 2026-03-02 00:49:31.761006 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 2.91s 2026-03-02 00:49:31.761010 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.48s 2026-03-02 00:49:31.761014 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.10s 2026-03-02 00:49:31.761018 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.08s 2026-03-02 00:49:31.761022 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.97s 2026-03-02 00:49:31.761026 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.42s 2026-03-02 00:49:31.761031 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.36s 2026-03-02 00:49:31.761035 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.28s 2026-03-02 00:49:31.761039 | orchestrator | Group hosts 
based on enabled services ----------------------------------- 1.21s 2026-03-02 00:49:31.761043 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.18s 2026-03-02 00:49:31.761047 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.18s 2026-03-02 00:49:31.761051 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.18s 2026-03-02 00:49:31.761055 | orchestrator | module-load : Load modules ---------------------------------------------- 1.12s 2026-03-02 00:49:31.761059 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.73s 2026-03-02 00:49:31.761063 | orchestrator | 2026-03-02 00:49:31 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:49:34.788991 | orchestrator | 2026-03-02 00:49:34 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:49:34.789523 | orchestrator | 2026-03-02 00:49:34 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED 2026-03-02 00:49:34.790190 | orchestrator | 2026-03-02 00:49:34 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:49:34.791882 | orchestrator | 2026-03-02 00:49:34 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:49:34.792462 | orchestrator | 2026-03-02 00:49:34 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED 2026-03-02 00:49:34.792506 | orchestrator | 2026-03-02 00:49:34 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:49:37.830932 | orchestrator | 2026-03-02 00:49:37 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:49:37.834182 | orchestrator | 2026-03-02 00:49:37 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED 2026-03-02 00:49:37.834699 | orchestrator | 2026-03-02 00:49:37 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 
2026-03-02 00:49:37.836790 | orchestrator | 2026-03-02 00:49:37 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state STARTED 2026-03-02 00:49:37.837196 | orchestrator | 2026-03-02 00:49:37 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED 2026-03-02 00:49:37.837278 | orchestrator | 2026-03-02 00:49:37 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:50:35.741041 | orchestrator | 2026-03-02 00:50:35 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:50:35.741129 | orchestrator | 2026-03-02 00:50:35 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED 2026-03-02 00:50:35.741658 | orchestrator | 2026-03-02 00:50:35 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:50:35.743265 | orchestrator | 2026-03-02 00:50:35 | INFO  | Task 5c84fca1-c3b2-49c7-9763-5e1c5a66eed3 is in state SUCCESS 2026-03-02 00:50:35.747139 | orchestrator | 2026-03-02 00:50:35.747202 | orchestrator | 2026-03-02 00:50:35.747213 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-02 00:50:35.747222 | orchestrator | 2026-03-02 00:50:35.747231 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-02 00:50:35.747240 | orchestrator | Monday 02 March 2026 00:46:09 +0000 (0:00:00.172) 0:00:00.172 ********** 2026-03-02 00:50:35.747272 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:50:35.747295 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:50:35.747303 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:50:35.747311 | orchestrator | 
ok: [testbed-node-0] 2026-03-02 00:50:35.747319 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:50:35.747327 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:50:35.747335 | orchestrator | 2026-03-02 00:50:35.747343 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-02 00:50:35.747351 | orchestrator | Monday 02 March 2026 00:46:10 +0000 (0:00:00.606) 0:00:00.778 ********** 2026-03-02 00:50:35.747359 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:50:35.747367 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:50:35.747375 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:50:35.747383 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:50:35.747449 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:50:35.747463 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:50:35.747472 | orchestrator | 2026-03-02 00:50:35.747480 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-02 00:50:35.747488 | orchestrator | Monday 02 March 2026 00:46:11 +0000 (0:00:00.636) 0:00:01.415 ********** 2026-03-02 00:50:35.747496 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:50:35.747504 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:50:35.747512 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:50:35.747519 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:50:35.747527 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:50:35.747535 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:50:35.747543 | orchestrator | 2026-03-02 00:50:35.747551 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-02 00:50:35.747559 | orchestrator | Monday 02 March 2026 00:46:11 +0000 (0:00:00.645) 0:00:02.061 ********** 2026-03-02 00:50:35.747567 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:50:35.747574 | orchestrator | changed: 
[testbed-node-4] 2026-03-02 00:50:35.747582 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:50:35.747590 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:50:35.747598 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:50:35.747605 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:50:35.747613 | orchestrator | 2026-03-02 00:50:35.747621 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-02 00:50:35.747629 | orchestrator | Monday 02 March 2026 00:46:13 +0000 (0:00:01.712) 0:00:03.773 ********** 2026-03-02 00:50:35.747636 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:50:35.747644 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:50:35.747652 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:50:35.747659 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:50:35.747667 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:50:35.747676 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:50:35.747685 | orchestrator | 2026-03-02 00:50:35.747694 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-02 00:50:35.747703 | orchestrator | Monday 02 March 2026 00:46:15 +0000 (0:00:01.996) 0:00:05.769 ********** 2026-03-02 00:50:35.747713 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:50:35.747722 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:50:35.747731 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:50:35.747740 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:50:35.747750 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:50:35.747758 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:50:35.747768 | orchestrator | 2026-03-02 00:50:35.747778 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-02 00:50:35.747787 | orchestrator | Monday 02 March 2026 00:46:16 +0000 (0:00:00.927) 0:00:06.697 ********** 
2026-03-02 00:50:35.747796 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:50:35.747805 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:50:35.747814 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:50:35.747824 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:50:35.747840 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:50:35.747850 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:50:35.747859 | orchestrator | 2026-03-02 00:50:35.747868 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-02 00:50:35.747877 | orchestrator | Monday 02 March 2026 00:46:17 +0000 (0:00:00.772) 0:00:07.470 ********** 2026-03-02 00:50:35.747887 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:50:35.747896 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:50:35.747905 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:50:35.747915 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:50:35.747945 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:50:35.747958 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:50:35.747971 | orchestrator | 2026-03-02 00:50:35.747985 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-02 00:50:35.747998 | orchestrator | Monday 02 March 2026 00:46:17 +0000 (0:00:00.701) 0:00:08.171 ********** 2026-03-02 00:50:35.748010 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-02 00:50:35.748019 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-02 00:50:35.748029 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:50:35.748038 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-02 00:50:35.748047 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-02 00:50:35.748056 | 
orchestrator | skipping: [testbed-node-4] 2026-03-02 00:50:35.748065 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-02 00:50:35.748074 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-02 00:50:35.748083 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:50:35.748093 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-02 00:50:35.748117 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-02 00:50:35.748127 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:50:35.748135 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-02 00:50:35.748143 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-02 00:50:35.748151 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:50:35.748158 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-02 00:50:35.748166 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-02 00:50:35.748174 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:50:35.748182 | orchestrator | 2026-03-02 00:50:35.748189 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-03-02 00:50:35.748197 | orchestrator | Monday 02 March 2026 00:46:18 +0000 (0:00:00.785) 0:00:08.956 ********** 2026-03-02 00:50:35.748205 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:50:35.748212 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:50:35.748220 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:50:35.748228 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:50:35.748235 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:50:35.748243 | orchestrator | skipping: [testbed-node-2] 2026-03-02 
00:50:35.748251 | orchestrator |
2026-03-02 00:50:35.748258 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-03-02 00:50:35.748267 | orchestrator | Monday 02 March 2026 00:46:19 +0000 (0:00:01.258) 0:00:10.215 **********
2026-03-02 00:50:35.748275 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:50:35.748283 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:50:35.748291 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:50:35.748298 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:50:35.748306 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:50:35.748321 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:50:35.748329 | orchestrator |
2026-03-02 00:50:35.748337 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-03-02 00:50:35.748345 | orchestrator | Monday 02 March 2026 00:46:20 +0000 (0:00:00.747) 0:00:10.963 **********
2026-03-02 00:50:35.748353 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:50:35.748360 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:50:35.748368 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:50:35.748376 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:50:35.748383 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:50:35.748416 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:50:35.748431 | orchestrator |
2026-03-02 00:50:35.748439 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-03-02 00:50:35.748447 | orchestrator | Monday 02 March 2026 00:46:26 +0000 (0:00:05.595) 0:00:16.559 **********
2026-03-02 00:50:35.748455 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:50:35.748462 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:50:35.748470 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:50:35.748478 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:50:35.748485 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:50:35.748493 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:50:35.748501 | orchestrator |
2026-03-02 00:50:35.748508 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-03-02 00:50:35.748516 | orchestrator | Monday 02 March 2026 00:46:27 +0000 (0:00:01.499) 0:00:18.058 **********
2026-03-02 00:50:35.748524 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:50:35.748531 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:50:35.748539 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:50:35.748547 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:50:35.748554 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:50:35.748562 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:50:35.748569 | orchestrator |
2026-03-02 00:50:35.748577 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-03-02 00:50:35.748587 | orchestrator | Monday 02 March 2026 00:46:29 +0000 (0:00:01.597) 0:00:19.656 **********
2026-03-02 00:50:35.748595 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:50:35.748602 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:50:35.748610 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:50:35.748618 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:50:35.748625 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:50:35.748633 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:50:35.748640 | orchestrator |
2026-03-02 00:50:35.748648 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-02 00:50:35.748656 | orchestrator | Monday 02 March 2026 00:46:30 +0000 (0:00:01.097) 0:00:20.753 **********
2026-03-02 00:50:35.748663 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-02 00:50:35.748678 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-03-02 00:50:35.748686 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:50:35.748693 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-02 00:50:35.748701 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-02 00:50:35.748709 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:50:35.748716 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-02 00:50:35.748724 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-03-02 00:50:35.748731 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:50:35.748739 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-03-02 00:50:35.748747 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-03-02 00:50:35.748754 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:50:35.748762 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-03-02 00:50:35.748770 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-03-02 00:50:35.748784 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:50:35.748791 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-02 00:50:35.748799 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-02 00:50:35.748807 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:50:35.748814 | orchestrator |
2026-03-02 00:50:35.748822 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-02 00:50:35.748835 | orchestrator | Monday 02 March 2026 00:46:31 +0000 (0:00:01.266) 0:00:22.020 **********
2026-03-02 00:50:35.748843 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:50:35.748851 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:50:35.748859 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:50:35.748866 | orchestrator | skipping: [testbed-node-0]
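The k3s_custom_registries tasks are skipped throughout because the testbed configures no custom container registry; when one is configured, the role renders /etc/rancher/k3s/registries.yaml. A minimal sketch of that file in the format documented for k3s (the mirror hostname and endpoint are placeholders, not values from this job):

```yaml
# /etc/rancher/k3s/registries.yaml -- hypothetical example, not from this run
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"   # pull docker.io images via this mirror
configs:
  "registry.example.com:5000":
    tls:
      insecure_skip_verify: false             # verify the mirror's TLS certificate
```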
2026-03-02 00:50:35.748874 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:50:35.748882 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:50:35.748889 | orchestrator |
2026-03-02 00:50:35.748897 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-02 00:50:35.748905 | orchestrator | Monday 02 March 2026 00:46:32 +0000 (0:00:01.153) 0:00:23.174 **********
2026-03-02 00:50:35.748912 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:50:35.748920 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:50:35.748928 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:50:35.748935 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:50:35.748943 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:50:35.748950 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:50:35.748958 | orchestrator |
2026-03-02 00:50:35.748966 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-02 00:50:35.748973 | orchestrator |
2026-03-02 00:50:35.748981 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-02 00:50:35.748989 | orchestrator | Monday 02 March 2026 00:46:34 +0000 (0:00:01.834) 0:00:25.008 **********
2026-03-02 00:50:35.748997 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:50:35.749004 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:50:35.749012 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:50:35.749020 | orchestrator |
2026-03-02 00:50:35.749027 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-02 00:50:35.749035 | orchestrator | Monday 02 March 2026 00:46:36 +0000 (0:00:01.996) 0:00:27.005 **********
2026-03-02 00:50:35.749043 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:50:35.749050 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:50:35.749058 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:50:35.749066 | orchestrator |
2026-03-02 00:50:35.749073 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-02 00:50:35.749081 | orchestrator | Monday 02 March 2026 00:46:37 +0000 (0:00:01.379) 0:00:28.384 **********
2026-03-02 00:50:35.749089 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:50:35.749096 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:50:35.749104 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:50:35.749111 | orchestrator |
2026-03-02 00:50:35.749119 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-02 00:50:35.749127 | orchestrator | Monday 02 March 2026 00:46:38 +0000 (0:00:00.960) 0:00:29.345 **********
2026-03-02 00:50:35.749134 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:50:35.749142 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:50:35.749150 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:50:35.749157 | orchestrator |
2026-03-02 00:50:35.749165 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-02 00:50:35.749173 | orchestrator | Monday 02 March 2026 00:46:39 +0000 (0:00:00.948) 0:00:30.293 **********
2026-03-02 00:50:35.749181 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:50:35.749188 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:50:35.749196 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:50:35.749204 | orchestrator |
2026-03-02 00:50:35.749211 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-02 00:50:35.749225 | orchestrator | Monday 02 March 2026 00:46:40 +0000 (0:00:00.483) 0:00:30.777 **********
2026-03-02 00:50:35.749232 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:50:35.749240 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:50:35.749248 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:50:35.749255 | orchestrator |
2026-03-02 00:50:35.749263 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-02 00:50:35.749271 | orchestrator | Monday 02 March 2026 00:46:41 +0000 (0:00:01.512) 0:00:32.290 **********
2026-03-02 00:50:35.749278 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:50:35.749286 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:50:35.749294 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:50:35.749304 | orchestrator |
2026-03-02 00:50:35.749316 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-02 00:50:35.749329 | orchestrator | Monday 02 March 2026 00:46:43 +0000 (0:00:01.593) 0:00:33.884 **********
2026-03-02 00:50:35.749342 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:50:35.749356 | orchestrator |
2026-03-02 00:50:35.749367 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-02 00:50:35.749381 | orchestrator | Monday 02 March 2026 00:46:43 +0000 (0:00:00.429) 0:00:34.313 **********
2026-03-02 00:50:35.749481 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:50:35.749506 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:50:35.749520 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:50:35.749533 | orchestrator |
2026-03-02 00:50:35.749547 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-03-02 00:50:35.749562 | orchestrator | Monday 02 March 2026 00:46:47 +0000 (0:00:03.270) 0:00:37.583 **********
2026-03-02 00:50:35.749572 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:50:35.749580 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:50:35.749588 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:50:35.749596 | orchestrator |
2026-03-02 00:50:35.749604 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-03-02 00:50:35.749612 | orchestrator | Monday 02 March 2026 00:46:47 +0000 (0:00:00.709) 0:00:38.292 **********
2026-03-02 00:50:35.749619 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:50:35.749627 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:50:35.749635 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:50:35.749643 | orchestrator |
2026-03-02 00:50:35.749651 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-03-02 00:50:35.749659 | orchestrator | Monday 02 March 2026 00:46:48 +0000 (0:00:01.000) 0:00:39.293 **********
2026-03-02 00:50:35.749667 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:50:35.749674 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:50:35.749682 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:50:35.749690 | orchestrator |
2026-03-02 00:50:35.749698 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-03-02 00:50:35.749713 | orchestrator | Monday 02 March 2026 00:46:50 +0000 (0:00:01.154) 0:00:40.447 **********
2026-03-02 00:50:35.749721 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:50:35.749729 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:50:35.749737 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:50:35.749744 | orchestrator |
2026-03-02 00:50:35.749752 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-03-02 00:50:35.749760 | orchestrator | Monday 02 March 2026 00:46:50 +0000 (0:00:00.835) 0:00:41.283 **********
2026-03-02 00:50:35.749768 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:50:35.749775 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:50:35.749783 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:50:35.749791 | orchestrator |
2026-03-02 00:50:35.749799 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-03-02 00:50:35.749807 | orchestrator | Monday 02 March 2026 00:46:51 +0000 (0:00:00.477) 0:00:41.760 **********
2026-03-02 00:50:35.749823 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:50:35.749830 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:50:35.749838 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:50:35.749846 | orchestrator |
2026-03-02 00:50:35.749854 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-03-02 00:50:35.749861 | orchestrator | Monday 02 March 2026 00:46:53 +0000 (0:00:01.941) 0:00:43.701 **********
2026-03-02 00:50:35.749869 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:50:35.749877 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:50:35.749885 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:50:35.749892 | orchestrator |
2026-03-02 00:50:35.749900 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-03-02 00:50:35.749908 | orchestrator | Monday 02 March 2026 00:46:55 +0000 (0:00:02.512) 0:00:46.214 **********
2026-03-02 00:50:35.749916 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:50:35.749924 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:50:35.749931 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:50:35.749939 | orchestrator |
2026-03-02 00:50:35.749947 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-03-02 00:50:35.749955 | orchestrator | Monday 02 March 2026 00:46:56 +0000 (0:00:00.828) 0:00:47.042 **********
2026-03-02 00:50:35.749963 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-02 00:50:35.749972 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-02 00:50:35.749979 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-02 00:50:35.749988 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-02 00:50:35.749995 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-02 00:50:35.750003 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-02 00:50:35.750011 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-02 00:50:35.750071 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-02 00:50:35.750079 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-02 00:50:35.750087 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-02 00:50:35.750095 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-02 00:50:35.750108 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
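The FAILED - RETRYING lines come from Ansible's retries/until mechanism: the verify task re-runs its check until it passes or the retry budget (20 here) is exhausted. A minimal shell sketch of the same pattern; check_nodes_ready is a stub standing in for the real check, which on a master would be something along the lines of counting Ready nodes in `kubectl get nodes`:

```shell
#!/bin/sh
# Retry-until-success loop, mirroring the Ansible retries/until pattern above.
# check_nodes_ready is hypothetical: here it simply succeeds on the 4th call.
attempt=0
check_nodes_ready() {
    attempt=$((attempt + 1))
    [ "$attempt" -ge 4 ]          # stub: fail three times, then succeed
}

retries=20
delay=0                           # the real task would sleep a few seconds here
i=0
until check_nodes_ready; do
    i=$((i + 1))
    if [ "$i" -ge "$retries" ]; then
        echo "nodes never became Ready" >&2
        exit 1
    fi
    echo "FAILED - RETRYING ($((retries - i)) retries left)"
    sleep "$delay"
done
echo "all nodes joined after $attempt checks"
```

In this run the check needed four attempts per node (20 down to 17 retries left) before the cluster converged, which matches the 54-second task duration reported below.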
2026-03-02 00:50:35.750116 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-02 00:50:35.750124 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-02 00:50:35.750132 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-02 00:50:35.750140 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:50:35.750148 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:50:35.750156 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:50:35.750169 | orchestrator |
2026-03-02 00:50:35.750177 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-02 00:50:35.750185 | orchestrator | Monday 02 March 2026 00:47:50 +0000 (0:00:54.051) 0:01:41.093 **********
2026-03-02 00:50:35.750193 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:50:35.750201 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:50:35.750208 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:50:35.750216 | orchestrator |
2026-03-02 00:50:35.750224 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-02 00:50:35.750237 | orchestrator | Monday 02 March 2026 00:47:50 +0000 (0:00:00.283) 0:01:41.377 **********
2026-03-02 00:50:35.750245 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:50:35.750253 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:50:35.750261 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:50:35.750269 | orchestrator |
2026-03-02 00:50:35.750277 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-02 00:50:35.750285 | orchestrator | Monday 02 March 2026 00:47:51 +0000 (0:00:00.852) 0:01:42.230 **********
2026-03-02 00:50:35.750292 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:50:35.750300 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:50:35.750308 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:50:35.750321 | orchestrator |
2026-03-02 00:50:35.750334 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-02 00:50:35.750347 | orchestrator | Monday 02 March 2026 00:47:52 +0000 (0:00:01.148) 0:01:43.378 **********
2026-03-02 00:50:35.750360 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:50:35.750371 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:50:35.750382 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:50:35.750455 | orchestrator |
2026-03-02 00:50:35.750470 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-02 00:50:35.750483 | orchestrator | Monday 02 March 2026 00:48:18 +0000 (0:00:25.413) 0:02:08.792 **********
2026-03-02 00:50:35.750496 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:50:35.750508 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:50:35.750520 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:50:35.750533 | orchestrator |
2026-03-02 00:50:35.750547 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-02 00:50:35.750560 | orchestrator | Monday 02 March 2026 00:48:19 +0000 (0:00:00.713) 0:02:09.506 **********
2026-03-02 00:50:35.750573 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:50:35.750586 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:50:35.750599 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:50:35.750612 | orchestrator |
2026-03-02 00:50:35.750625 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-02 00:50:35.750634 | orchestrator | Monday 02 March 2026 00:48:19 +0000 (0:00:00.688) 0:02:10.195 **********
2026-03-02 00:50:35.750642 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:50:35.750649 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:50:35.750657 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:50:35.750665 | orchestrator |
2026-03-02 00:50:35.750672 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-02 00:50:35.750680 | orchestrator | Monday 02 March 2026 00:48:20 +0000 (0:00:00.633) 0:02:10.828 **********
2026-03-02 00:50:35.750688 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:50:35.750696 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:50:35.750703 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:50:35.750711 | orchestrator |
2026-03-02 00:50:35.750719 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-02 00:50:35.750726 | orchestrator | Monday 02 March 2026 00:48:21 +0000 (0:00:00.991) 0:02:11.820 **********
2026-03-02 00:50:35.750734 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:50:35.750742 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:50:35.750750 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:50:35.750757 | orchestrator |
2026-03-02 00:50:35.750765 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-02 00:50:35.750794 | orchestrator | Monday 02 March 2026 00:48:21 +0000 (0:00:00.264) 0:02:12.084 **********
2026-03-02 00:50:35.750808 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:50:35.750822 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:50:35.750834 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:50:35.750847 | orchestrator |
2026-03-02 00:50:35.750861 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-02 00:50:35.750874 | orchestrator | Monday 02 March 2026 00:48:22 +0000 (0:00:00.691) 0:02:12.775 **********
2026-03-02 00:50:35.750888 | orchestrator | changed: [testbed-node-0]
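The node-token tasks above register the token file's access mode, temporarily relax it, read the token, and restore the original mode. A sketch of that sequence, using a scratch file instead of the real /var/lib/rancher/k3s/server/node-token (the default k3s server path); the token value is made up:

```shell
#!/bin/sh
# Node-token handoff sketch: register mode, change it, read, restore.
dir=$(mktemp -d)
token_file="$dir/node-token"
printf 'K10abc...::server:deadbeef\n' > "$token_file"   # stand-in token, not a real one

orig_mode=$(stat -c '%a' "$token_file")   # register the current access mode
chmod 600 "$token_file"                   # change file access while reading
token=$(cat "$token_file")                # read node-token from the master
chmod "$orig_mode" "$token_file"          # restore the original access mode

# An agent would then join the cluster with this token, roughly:
#   k3s agent --server https://192.168.16.8:6443 --token "$token"
echo "token read: ${#token} characters"
```

Registering and restoring the mode avoids leaving the token file world-readable just because the play had to read it once.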
2026-03-02 00:50:35.750901 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:50:35.750914 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:50:35.750925 | orchestrator |
2026-03-02 00:50:35.750933 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-02 00:50:35.750941 | orchestrator | Monday 02 March 2026 00:48:23 +0000 (0:00:00.667) 0:02:13.443 **********
2026-03-02 00:50:35.750949 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:50:35.750957 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:50:35.750965 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:50:35.750972 | orchestrator |
2026-03-02 00:50:35.750980 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-02 00:50:35.750988 | orchestrator | Monday 02 March 2026 00:48:24 +0000 (0:00:01.079) 0:02:14.522 **********
2026-03-02 00:50:35.750996 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:50:35.751004 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:50:35.751011 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:50:35.751019 | orchestrator |
2026-03-02 00:50:35.751027 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-02 00:50:35.751035 | orchestrator | Monday 02 March 2026 00:48:24 +0000 (0:00:00.812) 0:02:15.334 **********
2026-03-02 00:50:35.751043 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:50:35.751051 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:50:35.751059 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:50:35.751067 | orchestrator |
2026-03-02 00:50:35.751075 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-02 00:50:35.751083 | orchestrator | Monday 02 March 2026 00:48:25 +0000 (0:00:00.247) 0:02:15.581 **********
2026-03-02 00:50:35.751091 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:50:35.751099 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:50:35.751106 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:50:35.751114 | orchestrator |
2026-03-02 00:50:35.751122 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-02 00:50:35.751732 | orchestrator | Monday 02 March 2026 00:48:25 +0000 (0:00:00.249) 0:02:15.830 **********
2026-03-02 00:50:35.751759 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:50:35.751768 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:50:35.751776 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:50:35.751784 | orchestrator |
2026-03-02 00:50:35.751792 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-02 00:50:35.751801 | orchestrator | Monday 02 March 2026 00:48:26 +0000 (0:00:00.871) 0:02:16.702 **********
2026-03-02 00:50:35.751809 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:50:35.751828 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:50:35.751836 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:50:35.751844 | orchestrator |
2026-03-02 00:50:35.751853 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-02 00:50:35.751862 | orchestrator | Monday 02 March 2026 00:48:27 +0000 (0:00:00.712) 0:02:17.415 **********
2026-03-02 00:50:35.751870 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-02 00:50:35.751879 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-02 00:50:35.751886 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-02 00:50:35.751906 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-02 00:50:35.751914 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-02 00:50:35.751927 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-02 00:50:35.751935 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-02 00:50:35.751943 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-02 00:50:35.751951 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-02 00:50:35.751959 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-02 00:50:35.751967 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-02 00:50:35.751975 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-02 00:50:35.751983 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-02 00:50:35.751991 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-02 00:50:35.751999 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-02 00:50:35.752006 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-02 00:50:35.752015 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-02 00:50:35.752022 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-02 00:50:35.752030 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-02 00:50:35.752038 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-02 00:50:35.752046 | orchestrator |
2026-03-02 00:50:35.752053 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-02 00:50:35.752061 | orchestrator |
2026-03-02 00:50:35.752069 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-02 00:50:35.752077 | orchestrator | Monday 02 March 2026 00:48:30 +0000 (0:00:03.284) 0:02:20.699 **********
2026-03-02 00:50:35.752085 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:50:35.752093 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:50:35.752101 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:50:35.752108 | orchestrator |
2026-03-02 00:50:35.752116 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-02 00:50:35.752124 | orchestrator | Monday 02 March 2026 00:48:30 +0000 (0:00:00.478) 0:02:21.177 **********
2026-03-02 00:50:35.752132 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:50:35.752140 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:50:35.752147 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:50:35.752155 | orchestrator |
2026-03-02 00:50:35.752163 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-02 00:50:35.752171 | orchestrator | Monday 02 March 2026 00:48:32 +0000 (0:00:01.620) 0:02:22.798 **********
2026-03-02 00:50:35.752178 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:50:35.752186 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:50:35.752194 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:50:35.752201 | orchestrator |
2026-03-02 00:50:35.752209 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-02 00:50:35.752217 | orchestrator | Monday 02 March 2026 00:48:32 +0000 (0:00:00.309) 0:02:23.107 **********
2026-03-02 00:50:35.752225 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-02 00:50:35.752239 | orchestrator |
2026-03-02 00:50:35.752247 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-02 00:50:35.752255 | orchestrator | Monday 02 March 2026 00:48:33 +0000 (0:00:00.619) 0:02:23.727 **********
2026-03-02 00:50:35.752262 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:50:35.752270 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:50:35.752278 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:50:35.752286 | orchestrator |
2026-03-02 00:50:35.752294 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-02 00:50:35.752302 | orchestrator | Monday 02 March 2026 00:48:33 +0000 (0:00:00.349) 0:02:24.076 **********
2026-03-02 00:50:35.752309 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:50:35.752317 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:50:35.752325 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:50:35.752333 | orchestrator |
2026-03-02 00:50:35.752341 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-02 00:50:35.752355 | orchestrator | Monday 02 March 2026 00:48:33 +0000 (0:00:00.309) 0:02:24.386 **********
2026-03-02 00:50:35.752363 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:50:35.752371 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:50:35.752379 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:50:35.752387 | orchestrator |
2026-03-02 00:50:35.752453 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-02 00:50:35.752462 | orchestrator | Monday 02 March 2026 00:48:34 +0000 (0:00:00.333) 0:02:24.720 **********
2026-03-02 00:50:35.752470 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:50:35.752478 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:50:35.752485 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:50:35.752493 | orchestrator |
2026-03-02 00:50:35.752501 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-02 00:50:35.752509 | orchestrator | Monday 02 March 2026 00:48:35 +0000 (0:00:00.920) 0:02:25.640 **********
2026-03-02 00:50:35.752517 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:50:35.752525 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:50:35.752532 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:50:35.752540 | orchestrator |
2026-03-02 00:50:35.752548 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-02 00:50:35.752561 | orchestrator | Monday 02 March 2026 00:48:36 +0000 (0:00:01.145) 0:02:26.786 **********
2026-03-02 00:50:35.752569 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:50:35.752577 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:50:35.752585 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:50:35.752593 | orchestrator |
2026-03-02 00:50:35.752600 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-02 00:50:35.752608 | orchestrator | Monday 02 March 2026 00:48:38 +0000 (0:00:01.929) 0:02:28.716 **********
2026-03-02 00:50:35.752616 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:50:35.752624 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:50:35.752632 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:50:35.752639 | orchestrator |
2026-03-02 00:50:35.752647 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-02 00:50:35.752655 | orchestrator |
2026-03-02 00:50:35.752663 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-02 00:50:35.752671 | orchestrator | Monday 02 March 2026 00:48:48 +0000 (0:00:09.749) 0:02:38.465 **********
2026-03-02 00:50:35.752678 | orchestrator | ok: [testbed-manager]
2026-03-02 00:50:35.752686 | orchestrator |
2026-03-02 00:50:35.752694 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-02 00:50:35.752702 | orchestrator | Monday 02 March 2026 00:48:48 +0000 (0:00:00.676) 0:02:39.141 **********
2026-03-02 00:50:35.752710 | orchestrator | changed: [testbed-manager]
2026-03-02 00:50:35.752718 | orchestrator |
2026-03-02 00:50:35.752725 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-02 00:50:35.752733 | orchestrator | Monday 02 March 2026 00:48:49 +0000 (0:00:00.353) 0:02:39.495 **********
2026-03-02 00:50:35.752748 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-02 00:50:35.752756 | orchestrator |
2026-03-02 00:50:35.752764 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-02 00:50:35.752771 | orchestrator | Monday 02 March 2026 00:48:49 +0000 (0:00:00.467) 0:02:39.962 **********
2026-03-02 00:50:35.752779 | orchestrator | changed: [testbed-manager]
2026-03-02 00:50:35.752787 | orchestrator |
2026-03-02 00:50:35.752795 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-02 00:50:35.752803 | orchestrator | Monday 02 March 2026 00:48:50 +0000 (0:00:00.719) 0:02:40.682 **********
2026-03-02 00:50:35.752810 | orchestrator | changed: [testbed-manager]
2026-03-02 00:50:35.752818 | orchestrator |
2026-03-02 00:50:35.752826 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-02 00:50:35.752834 | orchestrator | Monday 02 March 2026 00:48:50 +0000 (0:00:00.472) 0:02:41.154 **********
2026-03-02 00:50:35.752841 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-02 00:50:35.752849 | orchestrator |
2026-03-02 00:50:35.752857 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-02 00:50:35.752865 | orchestrator | Monday 02 March 2026 00:48:52 +0000 (0:00:01.514) 0:02:42.668 **********
2026-03-02 00:50:35.752873 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-02 00:50:35.752881 | orchestrator |
2026-03-02 00:50:35.752889 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-02 00:50:35.752897 | orchestrator | Monday 02 March 2026 00:48:53 +0000 (0:00:00.828) 0:02:43.497 **********
2026-03-02 00:50:35.752905 | orchestrator | changed: [testbed-manager]
2026-03-02 00:50:35.752912 | orchestrator |
2026-03-02 00:50:35.752920 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-02 00:50:35.752928 | orchestrator | Monday 02 March 2026 00:48:53 +0000 (0:00:00.568) 0:02:44.065 **********
2026-03-02 00:50:35.752936 | orchestrator | changed: [testbed-manager]
2026-03-02 00:50:35.752944 | orchestrator |
2026-03-02 00:50:35.752951 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-02 00:50:35.752959 | orchestrator |
2026-03-02 00:50:35.752967 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-02 00:50:35.752975 | orchestrator | Monday 02 March 2026 00:48:54 +0000 (0:00:00.160) 0:02:44.507 **********
2026-03-02 00:50:35.752983 | orchestrator | ok: [testbed-manager]
2026-03-02 00:50:35.752990 | orchestrator |
2026-03-02 00:50:35.752998 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-02 00:50:35.753006 | orchestrator | Monday 02 March 2026 00:48:54 +0000 (0:00:00.223) 0:02:44.668 **********
2026-03-02 00:50:35.753014 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-02 00:50:35.753021 | orchestrator |
2026-03-02 00:50:35.753029 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-02 00:50:35.753037 | orchestrator | Monday 02 March 2026 00:48:54 +0000 (0:00:00.223) 0:02:44.892 **********
2026-03-02 00:50:35.753045 | orchestrator | ok: [testbed-manager]
2026-03-02 00:50:35.753052 | orchestrator |
2026-03-02 00:50:35.753060 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-02 00:50:35.753068 | orchestrator | Monday 02 March 2026 00:48:55 +0000 (0:00:00.784) 0:02:45.676 **********
2026-03-02 00:50:35.753081 | orchestrator | ok: [testbed-manager]
2026-03-02 00:50:35.753089 | orchestrator |
2026-03-02 00:50:35.753097 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-02 00:50:35.753105 | orchestrator | Monday 02 March 2026 00:48:56 +0000 (0:00:01.555) 0:02:47.231 **********
2026-03-02 00:50:35.753113 | orchestrator | changed: [testbed-manager]
2026-03-02 00:50:35.753120 | orchestrator |
2026-03-02 00:50:35.753128 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-02 00:50:35.753136 | orchestrator | Monday 02 March 2026 00:48:57 +0000 (0:00:00.870) 0:02:48.102 **********
2026-03-02 00:50:35.753150 | orchestrator | ok: [testbed-manager]
2026-03-02 00:50:35.753158 | orchestrator |
2026-03-02 00:50:35.753166 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-02 00:50:35.753174 | orchestrator | Monday 02 March 2026 00:48:58 +0000 (0:00:00.404) 0:02:48.507 **********
2026-03-02 00:50:35.753181 | orchestrator | changed: [testbed-manager]
2026-03-02 00:50:35.753189 | orchestrator |
2026-03-02 00:50:35.753197 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-02 00:50:35.753205 | orchestrator | Monday 02 March 2026 00:49:05 +0000 (0:00:07.018) 0:02:55.525 **********
2026-03-02 00:50:35.753212 | orchestrator | changed: [testbed-manager]
2026-03-02 00:50:35.753220 | orchestrator |
2026-03-02 00:50:35.753233 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-02 00:50:35.753241 | orchestrator | Monday 02 March 2026 00:49:17 +0000 (0:00:12.627) 0:03:08.153 **********
2026-03-02 00:50:35.753249 | orchestrator | ok: [testbed-manager]
2026-03-02 00:50:35.753257 | orchestrator |
2026-03-02 00:50:35.753264 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-02 00:50:35.753272 | orchestrator |
2026-03-02 00:50:35.753280 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-02 00:50:35.753288 | orchestrator | Monday 02 March 2026 00:49:18 +0000 (0:00:00.525) 0:03:08.678 **********
2026-03-02 00:50:35.753295 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:50:35.753303 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:50:35.753311 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:50:35.753319 | orchestrator |
2026-03-02 00:50:35.753327 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-02 00:50:35.753335 | orchestrator | Monday 02 March 2026 00:49:18 +0000 (0:00:00.392) 0:03:09.070 **********
2026-03-02 00:50:35.753343 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:50:35.753350 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:50:35.753358 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:50:35.753366 | orchestrator |
2026-03-02 00:50:35.753374 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-02 00:50:35.753382 | orchestrator | Monday 02 March 2026 00:49:18 +0000 (0:00:00.322) 0:03:09.393 **********
2026-03-02 00:50:35.753406 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:50:35.753419 | orchestrator |
2026-03-02 00:50:35.753431 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-02 00:50:35.753442 | orchestrator | Monday 02 March 2026 00:49:19 +0000 (0:00:00.628) 0:03:10.021 **********
2026-03-02 00:50:35.753454 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-02 00:50:35.753465 | orchestrator |
2026-03-02 00:50:35.753479 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-03-02 00:50:35.753492 | orchestrator | Monday 02 March 2026 00:49:20 +0000 (0:00:00.758) 0:03:10.780 **********
2026-03-02 00:50:35.753504 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-02 00:50:35.753517 | orchestrator |
2026-03-02 00:50:35.753530 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-03-02 00:50:35.753543 | orchestrator | Monday 02 March 2026 00:49:21 +0000 (0:00:00.790) 0:03:11.571 **********
2026-03-02 00:50:35.753557 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:50:35.753570 | orchestrator |
2026-03-02 00:50:35.753583 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-03-02 00:50:35.753594 | orchestrator | Monday 02 March 2026 00:49:21 +0000 (0:00:00.108) 0:03:11.680 **********
2026-03-02 00:50:35.753602 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-02 00:50:35.753610 | orchestrator |
2026-03-02 00:50:35.753618 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-03-02 00:50:35.753626 | orchestrator | Monday 02 March 2026 00:49:22 +0000 (0:00:00.982) 0:03:12.663 **********
2026-03-02 00:50:35.753634 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:50:35.753641 | orchestrator |
2026-03-02 00:50:35.753655 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-03-02 00:50:35.753663 | orchestrator | Monday 02 March 2026 00:49:22 +0000 (0:00:00.114) 0:03:12.777 **********
2026-03-02 00:50:35.753670 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:50:35.753678 | orchestrator |
2026-03-02 00:50:35.753686 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-03-02 00:50:35.753694 | orchestrator | Monday 02 March 2026 00:49:22 +0000 (0:00:00.125) 0:03:12.902 **********
2026-03-02 00:50:35.753701 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:50:35.753709 | orchestrator |
2026-03-02 00:50:35.753717 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-03-02 00:50:35.753724 | orchestrator | Monday 02 March 2026 00:49:22 +0000 (0:00:00.125) 0:03:13.028 **********
2026-03-02 00:50:35.753732 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:50:35.753740 | orchestrator |
2026-03-02 00:50:35.753748 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-03-02 00:50:35.753755 | orchestrator | Monday 02 March 2026 00:49:22 +0000 (0:00:00.110) 0:03:13.139 **********
2026-03-02 00:50:35.753763 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-02 00:50:35.753771 | orchestrator |
2026-03-02 00:50:35.753779 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-03-02 00:50:35.753787 | orchestrator | Monday 02 March 2026 00:49:27 +0000 (0:00:04.721) 0:03:17.861 **********
2026-03-02 00:50:35.753794 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-03-02 00:50:35.753808 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-03-02 00:50:35.753817 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-03-02 00:50:35.753825 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-03-02 00:50:35.753832 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-03-02 00:50:35.753840 | orchestrator |
2026-03-02 00:50:35.753848 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-03-02 00:50:35.753856 | orchestrator | Monday 02 March 2026 00:50:10 +0000 (0:00:42.728) 0:04:00.589 **********
2026-03-02 00:50:35.753863 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-02 00:50:35.753871 | orchestrator |
2026-03-02 00:50:35.753879 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-03-02 00:50:35.753887 | orchestrator | Monday 02 March 2026 00:50:11 +0000 (0:00:01.065) 0:04:01.654 **********
2026-03-02 00:50:35.753894 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-02 00:50:35.753902 | orchestrator |
2026-03-02 00:50:35.753910 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-03-02 00:50:35.753918 | orchestrator | Monday 02 March 2026 00:50:13 +0000 (0:00:02.059) 0:04:03.714 **********
2026-03-02 00:50:35.753930 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-02 00:50:35.753938 | orchestrator |
2026-03-02 00:50:35.753946 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-03-02 00:50:35.753954 | orchestrator | Monday 02 March 2026 00:50:14 +0000 (0:00:01.027) 0:04:04.741 **********
2026-03-02 00:50:35.753961 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:50:35.753969 | orchestrator |
2026-03-02 00:50:35.754112 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-03-02 00:50:35.754122 | orchestrator | Monday 02 March 2026 00:50:14 +0000 (0:00:00.100) 0:04:04.841 **********
2026-03-02 00:50:35.754130 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-03-02 00:50:35.754138 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-03-02 00:50:35.754146 | orchestrator |
2026-03-02 00:50:35.754154 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-03-02 00:50:35.754166 | orchestrator | Monday 02 March 2026 00:50:16 +0000 (0:00:01.699) 0:04:06.540 **********
2026-03-02 00:50:35.754193 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:50:35.754211 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:50:35.754224 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:50:35.754236 | orchestrator |
2026-03-02 00:50:35.754249 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-03-02 00:50:35.754262 | orchestrator | Monday 02 March 2026 00:50:16 +0000 (0:00:00.257) 0:04:06.798 **********
2026-03-02 00:50:35.754273 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:50:35.754286 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:50:35.754299 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:50:35.754312 | orchestrator |
2026-03-02 00:50:35.754325 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-03-02 00:50:35.754336 | orchestrator |
2026-03-02 00:50:35.754347 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-03-02 00:50:35.754361 | orchestrator | Monday 02 March 2026 00:50:17 +0000 (0:00:00.976) 0:04:07.774 **********
2026-03-02 00:50:35.754373 | orchestrator | ok: [testbed-manager]
2026-03-02 00:50:35.754385 | orchestrator |
2026-03-02 00:50:35.754418 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-03-02 00:50:35.754431 | orchestrator | Monday 02 March 2026 00:50:17 +0000 (0:00:00.140) 0:04:07.915 **********
2026-03-02 00:50:35.754444 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-03-02 00:50:35.754458 | orchestrator |
2026-03-02 00:50:35.754470 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-03-02 00:50:35.754483 | orchestrator | Monday 02 March 2026 00:50:17 +0000 (0:00:00.188) 0:04:08.104 **********
2026-03-02 00:50:35.754496 | orchestrator | changed: [testbed-manager]
2026-03-02 00:50:35.754509 | orchestrator |
2026-03-02 00:50:35.754522 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-03-02 00:50:35.754535 | orchestrator |
2026-03-02 00:50:35.754549 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-03-02 00:50:35.754562 | orchestrator | Monday 02 March 2026 00:50:22 +0000 (0:00:04.940) 0:04:13.044 **********
2026-03-02 00:50:35.754577 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:50:35.754590 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:50:35.754604 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:50:35.754617 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:50:35.754631 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:50:35.754644 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:50:35.754656 | orchestrator |
2026-03-02 00:50:35.754670 | orchestrator | TASK [Manage labels] ***********************************************************
2026-03-02 00:50:35.754684 | orchestrator | Monday 02 March 2026 00:50:23 +0000 (0:00:00.764) 0:04:13.809 **********
2026-03-02 00:50:35.754698 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-02 00:50:35.754712 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-02 00:50:35.754725 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-02 00:50:35.754739 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-02 00:50:35.754752 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-02 00:50:35.754765 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-02 00:50:35.754778 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-02 00:50:35.754792 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-02 00:50:35.754817 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-02 00:50:35.754833 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-02 00:50:35.754846 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-02 00:50:35.754872 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-02 00:50:35.754886 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-02 00:50:35.754899 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-02 00:50:35.754912 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-02 00:50:35.754926 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-02 00:50:35.754940 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-02 00:50:35.754961 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-02 00:50:35.754974 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-02 00:50:35.754986 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-02 00:50:35.754999 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-02 00:50:35.755012 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-02 00:50:35.755025 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-02 00:50:35.755039 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-02 00:50:35.755053 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-02 00:50:35.755066 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-02 00:50:35.755078 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-02 00:50:35.755091 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-02 00:50:35.755105 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-02 00:50:35.755119 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-02 00:50:35.755132 | orchestrator |
2026-03-02 00:50:35.755146 | orchestrator | TASK [Manage annotations] ******************************************************
2026-03-02 00:50:35.755159 | orchestrator | Monday 02 March 2026 00:50:34 +0000 (0:00:10.857) 0:04:24.666 **********
2026-03-02 00:50:35.755172 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:50:35.755184 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:50:35.755198 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:50:35.755210 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:50:35.755223 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:50:35.755237 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:50:35.755251 | orchestrator |
2026-03-02 00:50:35.755263 | orchestrator | TASK [Manage taints] ***********************************************************
2026-03-02 00:50:35.755276 | orchestrator | Monday 02 March 2026 00:50:34 +0000 (0:00:00.525) 0:04:25.192 **********
2026-03-02 00:50:35.755289 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:50:35.755302 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:50:35.755315 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:50:35.755329 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:50:35.755341 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:50:35.755355 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:50:35.755368 | orchestrator |
2026-03-02 00:50:35.755381 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:50:35.755451 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:50:35.755469 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-02 00:50:35.755489 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-02 00:50:35.755497 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-02 00:50:35.755505 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-02 00:50:35.755513 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-02 00:50:35.755521 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-02 00:50:35.755528 | orchestrator |
2026-03-02 00:50:35.755536 | orchestrator |
2026-03-02 00:50:35.755544 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 00:50:35.755560 | orchestrator | Monday 02 March 2026 00:50:35 +0000 (0:00:00.407) 0:04:25.600 **********
2026-03-02 00:50:35.755568 | orchestrator | ===============================================================================
2026-03-02 00:50:35.755576 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.05s
2026-03-02 00:50:35.755585 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.73s
2026-03-02 00:50:35.755592 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.41s
2026-03-02 00:50:35.755600 | orchestrator | kubectl : Install required packages ------------------------------------ 12.63s
2026-03-02 00:50:35.755608 | orchestrator | Manage labels ---------------------------------------------------------- 10.86s
2026-03-02 00:50:35.755616 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.75s
2026-03-02 00:50:35.755623 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.02s
2026-03-02 00:50:35.755631 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.60s
2026-03-02 00:50:35.755650 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.94s
2026-03-02 00:50:35.755658 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.72s
2026-03-02 00:50:35.755666 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.28s
2026-03-02 00:50:35.755674 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.27s
2026-03-02 00:50:35.755681 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.51s
2026-03-02 00:50:35.755689 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.06s
2026-03-02 00:50:35.755697 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.00s
2026-03-02 00:50:35.755705 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.00s
2026-03-02 00:50:35.755713 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.94s
2026-03-02 00:50:35.755721 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 1.93s
2026-03-02 00:50:35.755728 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 1.83s
2026-03-02 00:50:35.755736 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.71s
2026-03-02 00:50:35.755744 | orchestrator | 2026-03-02 00:50:35 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED
2026-03-02 00:50:35.755752 | orchestrator | 2026-03-02 00:50:35 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:50:38.915120 | orchestrator | 2026-03-02 00:50:38 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:50:38.915538 | orchestrator | 2026-03-02 00:50:38 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED
2026-03-02 00:50:38.916015 | orchestrator | 2026-03-02 00:50:38 | INFO  | Task 7a3d7632-178f-4cf9-80c1-83a77ad44acf is in state STARTED
2026-03-02 00:50:38.916868 | orchestrator | 2026-03-02 00:50:38 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED
2026-03-02 00:50:38.917606 | orchestrator | 2026-03-02 00:50:38 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED
2026-03-02 00:50:38.918379 | orchestrator | 2026-03-02 00:50:38 | INFO  | Task 4f2df36b-89f1-4362-a0a4-64c57d5e3741 is in state STARTED
2026-03-02 00:50:38.918422 | orchestrator | 2026-03-02 00:50:38 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:50:41.956836 | orchestrator | 2026-03-02 00:50:41 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:50:41.956906 | orchestrator | 2026-03-02 00:50:41 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED
2026-03-02 00:50:41.956913 | orchestrator | 2026-03-02 00:50:41 | INFO  | Task 7a3d7632-178f-4cf9-80c1-83a77ad44acf is in state STARTED
2026-03-02 00:50:41.956917 | orchestrator | 2026-03-02 00:50:41 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED
2026-03-02 00:50:41.956922 | orchestrator | 2026-03-02 00:50:41 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED
2026-03-02 00:50:41.956927 | orchestrator | 2026-03-02 00:50:41 | INFO  | Task 4f2df36b-89f1-4362-a0a4-64c57d5e3741 is in state STARTED
2026-03-02 00:50:41.956931 | orchestrator | 2026-03-02 00:50:41 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:50:44.982828 | orchestrator | 2026-03-02 00:50:44 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:50:44.982960 | orchestrator | 2026-03-02 00:50:44 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED
2026-03-02 00:50:44.983294 | orchestrator | 2026-03-02 00:50:44 | INFO  | Task 7a3d7632-178f-4cf9-80c1-83a77ad44acf is in state STARTED
2026-03-02 00:50:44.984518 | orchestrator | 2026-03-02 00:50:44 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED
2026-03-02 00:50:44.984742 | orchestrator | 2026-03-02 00:50:44 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED
2026-03-02 00:50:44.985193 | orchestrator | 2026-03-02 00:50:44 | INFO  | Task 4f2df36b-89f1-4362-a0a4-64c57d5e3741 is in state SUCCESS
2026-03-02 00:50:44.985244 | orchestrator | 2026-03-02 00:50:44 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:50:48.014869 | orchestrator | 2026-03-02 00:50:48 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:50:48.014993 | orchestrator | 2026-03-02 00:50:48 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED
2026-03-02 00:50:48.015565 | orchestrator | 2026-03-02 00:50:48 | INFO  | Task 7a3d7632-178f-4cf9-80c1-83a77ad44acf is in state SUCCESS
2026-03-02 00:50:48.016250 | orchestrator | 2026-03-02 00:50:48 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED
2026-03-02 00:50:48.017298 | orchestrator | 2026-03-02 00:50:48 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED
2026-03-02 00:50:48.017379 | orchestrator | 2026-03-02 00:50:48 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:50:51.052192 | orchestrator | 2026-03-02 00:50:51 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:50:51.053546 | orchestrator | 2026-03-02 00:50:51 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED
2026-03-02 00:50:51.054430 | orchestrator | 2026-03-02 00:50:51 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED
2026-03-02 00:50:51.055678 | orchestrator | 2026-03-02 00:50:51 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED
2026-03-02 00:50:51.055791 | orchestrator | 2026-03-02 00:50:51 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:50:54.102538 | orchestrator | 2026-03-02 00:50:54 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:50:54.108885 | orchestrator | 2026-03-02 00:50:54 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED
2026-03-02 00:50:54.111193 | orchestrator | 2026-03-02 00:50:54 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED
2026-03-02 00:50:54.112049 | orchestrator | 2026-03-02 00:50:54 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED
2026-03-02 00:50:54.112098 | orchestrator | 2026-03-02 00:50:54 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:50:57.156561 | orchestrator | 2026-03-02 00:50:57 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:50:57.159532 | orchestrator | 2026-03-02 00:50:57 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED
2026-03-02 00:50:57.160371 | orchestrator | 2026-03-02 00:50:57 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED
2026-03-02 00:50:57.161429 | orchestrator | 2026-03-02 00:50:57 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED
2026-03-02 00:50:57.161679 | orchestrator | 2026-03-02 00:50:57 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:51:00.205490 | orchestrator | 2026-03-02 00:51:00 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:51:00.206010 | orchestrator | 2026-03-02 00:51:00 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED
2026-03-02 00:51:00.207003 | orchestrator | 2026-03-02 00:51:00 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED
2026-03-02 00:51:00.209590 | orchestrator | 2026-03-02 00:51:00 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED
2026-03-02 00:51:00.209627 | orchestrator | 2026-03-02 00:51:00 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:51:03.244649 | orchestrator | 2026-03-02 00:51:03 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:51:03.245056 | orchestrator | 2026-03-02 00:51:03 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state STARTED
2026-03-02 00:51:03.246232 | orchestrator | 2026-03-02 00:51:03 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED
2026-03-02 00:51:03.247138 | orchestrator | 2026-03-02 00:51:03 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED
2026-03-02 00:51:03.247164 | orchestrator | 2026-03-02 00:51:03 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:51:06.290467 | orchestrator | 2026-03-02 00:51:06 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:51:06.290806 | orchestrator | 2026-03-02 00:51:06 | INFO  | Task cc65b781-dbfd-4713-aa18-8c9f3bf60b93 is in state SUCCESS
2026-03-02 00:51:06.291021 | orchestrator |
2026-03-02 00:51:06.291042 | orchestrator |
2026-03-02 00:51:06.291051 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-03-02 00:51:06.291060 | orchestrator |
2026-03-02 00:51:06.291068 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-02 00:51:06.291074 | orchestrator | Monday 02 March 2026 00:50:39 +0000 (0:00:00.122) 0:00:00.123 **********
2026-03-02 00:51:06.291081 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-02 00:51:06.291138 | orchestrator |
2026-03-02 00:51:06.291145 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-02 00:51:06.291151 | orchestrator | Monday 02 March 2026 00:50:40 +0000 (0:00:00.672) 0:00:00.795 **********
2026-03-02 00:51:06.291157 | orchestrator | changed: [testbed-manager]
2026-03-02 00:51:06.291164 | orchestrator |
2026-03-02 00:51:06.291171 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-03-02 00:51:06.291179 | orchestrator | Monday 02 March 2026 00:50:41 +0000 (0:00:00.884) 0:00:01.680 **********
2026-03-02 00:51:06.291186 | orchestrator | changed: [testbed-manager]
2026-03-02 00:51:06.291192 | orchestrator |
2026-03-02 00:51:06.291254 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:51:06.291265 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 00:51:06.291274 | orchestrator |
2026-03-02 00:51:06.291281 | orchestrator |
2026-03-02 00:51:06.291289 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 00:51:06.291297 | orchestrator | Monday 02 March 2026 00:50:42 +0000 (0:00:00.628) 0:00:02.308 **********
2026-03-02 00:51:06.291304 | orchestrator | ===============================================================================
2026-03-02 00:51:06.291311 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.88s
2026-03-02 00:51:06.291319 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.67s
2026-03-02 00:51:06.291326 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.63s
2026-03-02 00:51:06.291334 | orchestrator |
2026-03-02 00:51:06.291341 | orchestrator |
2026-03-02 00:51:06.291348 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-02 00:51:06.291356 | orchestrator |
2026-03-02 00:51:06.291363 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-02 00:51:06.291398 | orchestrator | Monday 02 March 2026 00:50:39 +0000 (0:00:00.140) 0:00:00.140 **********
2026-03-02 00:51:06.291405 | orchestrator | ok: [testbed-manager]
2026-03-02 00:51:06.291412 | orchestrator |
2026-03-02 00:51:06.291420 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-02 00:51:06.291427 | orchestrator | Monday 02 March 2026 00:50:40 +0000 (0:00:00.565) 0:00:00.706 **********
2026-03-02 00:51:06.291434 | orchestrator | ok: [testbed-manager]
2026-03-02 00:51:06.291441 | orchestrator |
2026-03-02
00:51:06.291450 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-02 00:51:06.291457 | orchestrator | Monday 02 March 2026 00:50:40 +0000 (0:00:00.436) 0:00:01.142 ********** 2026-03-02 00:51:06.291468 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-02 00:51:06.291479 | orchestrator | 2026-03-02 00:51:06.291486 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-02 00:51:06.291492 | orchestrator | Monday 02 March 2026 00:50:41 +0000 (0:00:00.751) 0:00:01.894 ********** 2026-03-02 00:51:06.291500 | orchestrator | changed: [testbed-manager] 2026-03-02 00:51:06.291506 | orchestrator | 2026-03-02 00:51:06.291514 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-02 00:51:06.291571 | orchestrator | Monday 02 March 2026 00:50:42 +0000 (0:00:01.141) 0:00:03.036 ********** 2026-03-02 00:51:06.291582 | orchestrator | changed: [testbed-manager] 2026-03-02 00:51:06.291591 | orchestrator | 2026-03-02 00:51:06.291598 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-02 00:51:06.291606 | orchestrator | Monday 02 March 2026 00:50:43 +0000 (0:00:00.511) 0:00:03.547 ********** 2026-03-02 00:51:06.291614 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-02 00:51:06.291621 | orchestrator | 2026-03-02 00:51:06.291629 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-02 00:51:06.291636 | orchestrator | Monday 02 March 2026 00:50:44 +0000 (0:00:01.621) 0:00:05.169 ********** 2026-03-02 00:51:06.291643 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-02 00:51:06.291662 | orchestrator | 2026-03-02 00:51:06.291673 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-02 00:51:06.291703 | orchestrator | 
Monday 02 March 2026 00:50:45 +0000 (0:00:00.859) 0:00:06.029 ********** 2026-03-02 00:51:06.291715 | orchestrator | ok: [testbed-manager] 2026-03-02 00:51:06.291724 | orchestrator | 2026-03-02 00:51:06.291732 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-02 00:51:06.291740 | orchestrator | Monday 02 March 2026 00:50:46 +0000 (0:00:00.374) 0:00:06.404 ********** 2026-03-02 00:51:06.291748 | orchestrator | ok: [testbed-manager] 2026-03-02 00:51:06.291756 | orchestrator | 2026-03-02 00:51:06.291764 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:51:06.291772 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:51:06.291781 | orchestrator | 2026-03-02 00:51:06.291788 | orchestrator | 2026-03-02 00:51:06.291796 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 00:51:06.291804 | orchestrator | Monday 02 March 2026 00:50:46 +0000 (0:00:00.293) 0:00:06.698 ********** 2026-03-02 00:51:06.291813 | orchestrator | =============================================================================== 2026-03-02 00:51:06.291822 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.62s 2026-03-02 00:51:06.291830 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.14s 2026-03-02 00:51:06.291837 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.86s 2026-03-02 00:51:06.291860 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.75s 2026-03-02 00:51:06.291869 | orchestrator | Get home directory of operator user ------------------------------------- 0.57s 2026-03-02 00:51:06.291877 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.51s 
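The "Change server address in the kubeconfig" tasks above rewrite the API server URL so the manager reaches the cluster through a different address than the one kubeadm wrote into the file. A minimal sketch of such a rewrite, assuming a regex-based replacement in the style of `ansible.builtin.replace` (the function name, regex, and new host are illustrative; the playbook's actual implementation is not shown in this log):

```python
import re

def change_server_address(kubeconfig_text: str, new_host: str) -> str:
    # Replace the host part of the 'server:' URL, keeping scheme and port.
    # The regex is an assumption; the playbook's real task is not in this log.
    return re.sub(
        r"^(\s*server:\s+https://)[^:/\s]+",
        lambda m: m.group(1) + new_host,
        kubeconfig_text,
        flags=re.MULTILINE,
    )

# Hypothetical kubeconfig fragment; 192.168.16.10 is testbed-node-0 above,
# and the replacement address is made up for illustration.
before = "clusters:\n- cluster:\n    server: https://192.168.16.10:6443\n"
after = change_server_address(before, "192.168.16.5")
```

Using a callable replacement avoids backreference ambiguity when the new host starts with a digit, which is common for IP addresses.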
2026-03-02 00:51:06.291885 | orchestrator | Create .kube directory -------------------------------------------------- 0.44s
2026-03-02 00:51:06.291893 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.38s
2026-03-02 00:51:06.291902 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.29s
2026-03-02 00:51:06.291910 | orchestrator |
2026-03-02 00:51:06.292138 | orchestrator |
2026-03-02 00:51:06.292157 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-03-02 00:51:06.292165 | orchestrator |
2026-03-02 00:51:06.292173 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-03-02 00:51:06.292180 | orchestrator | Monday 02 March 2026 00:48:44 +0000 (0:00:00.099) 0:00:00.099 **********
2026-03-02 00:51:06.292195 | orchestrator | ok: [localhost] => {
2026-03-02 00:51:06.292204 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-03-02 00:51:06.292212 | orchestrator | }
2026-03-02 00:51:06.292220 | orchestrator |
2026-03-02 00:51:06.292227 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-03-02 00:51:06.292234 | orchestrator | Monday 02 March 2026 00:48:44 +0000 (0:00:00.033) 0:00:00.133 **********
2026-03-02 00:51:06.292243 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-03-02 00:51:06.292252 | orchestrator | ...ignoring
2026-03-02 00:51:06.292260 | orchestrator |
2026-03-02 00:51:06.292266 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-03-02 00:51:06.292273 | orchestrator | Monday 02 March 2026 00:48:48 +0000 (0:00:04.027) 0:00:04.160 **********
2026-03-02 00:51:06.292280 | orchestrator | skipping: [localhost]
2026-03-02 00:51:06.292288 | orchestrator |
2026-03-02 00:51:06.292295 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-03-02 00:51:06.292303 | orchestrator | Monday 02 March 2026 00:48:48 +0000 (0:00:00.042) 0:00:04.203 **********
2026-03-02 00:51:06.292310 | orchestrator | ok: [localhost]
2026-03-02 00:51:06.292327 | orchestrator |
2026-03-02 00:51:06.292334 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-02 00:51:06.292342 | orchestrator |
2026-03-02 00:51:06.292350 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-02 00:51:06.292358 | orchestrator | Monday 02 March 2026 00:48:49 +0000 (0:00:00.136) 0:00:04.339 **********
2026-03-02 00:51:06.292388 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:51:06.292397 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:51:06.292404 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:51:06.292412 | orchestrator |
2026-03-02 00:51:06.292419 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-02 00:51:06.292427 | orchestrator | Monday 02 March 2026 00:48:49 +0000 (0:00:00.296) 0:00:04.636 **********
2026-03-02 00:51:06.292433 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-03-02 00:51:06.292440 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-03-02 00:51:06.292447 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-03-02 00:51:06.292454 | orchestrator |
2026-03-02 00:51:06.292461 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-03-02 00:51:06.292468 | orchestrator |
2026-03-02 00:51:06.292475 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-02 00:51:06.292482 | orchestrator | Monday 02 March 2026 00:48:50 +0000 (0:00:00.840) 0:00:05.477 **********
2026-03-02 00:51:06.292490 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:51:06.292497 | orchestrator |
2026-03-02 00:51:06.292504 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-02 00:51:06.292511 | orchestrator | Monday 02 March 2026 00:48:50 +0000 (0:00:00.432) 0:00:05.909 **********
2026-03-02 00:51:06.292517 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:51:06.292524 | orchestrator |
2026-03-02 00:51:06.292532 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-03-02 00:51:06.292539 | orchestrator | Monday 02 March 2026 00:48:51 +0000 (0:00:00.956) 0:00:06.865 **********
2026-03-02 00:51:06.292546 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:51:06.292554 | orchestrator |
2026-03-02 00:51:06.292560 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-03-02 00:51:06.292568 | orchestrator | Monday 02 March 2026 00:48:51 +0000 (0:00:00.369) 0:00:07.235 **********
2026-03-02 00:51:06.292574 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:51:06.292582 | orchestrator |
2026-03-02 00:51:06.292590 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-03-02 00:51:06.292597 | orchestrator | Monday 02 March 2026 00:48:52 +0000 (0:00:00.356) 0:00:07.592 **********
2026-03-02 00:51:06.292605 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:51:06.292612 | orchestrator |
2026-03-02 00:51:06.292620 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-03-02 00:51:06.292628 | orchestrator | Monday 02 March 2026 00:48:52 +0000 (0:00:00.435) 0:00:08.027 **********
2026-03-02 00:51:06.292634 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:51:06.292642 | orchestrator |
2026-03-02 00:51:06.292650 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-02 00:51:06.292657 | orchestrator | Monday 02 March 2026 00:48:54 +0000 (0:00:01.251) 0:00:09.279 **********
2026-03-02 00:51:06.292664 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:51:06.292671 | orchestrator |
2026-03-02 00:51:06.292679 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-02 00:51:06.292688 | orchestrator | Monday 02 March 2026 00:48:55 +0000 (0:00:01.098) 0:00:10.378 **********
2026-03-02 00:51:06.292701 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:51:06.292708 | orchestrator |
2026-03-02 00:51:06.292716 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-03-02 00:51:06.292730 | orchestrator | Monday 02 March 2026 00:48:56 +0000 (0:00:01.096) 0:00:11.474 **********
2026-03-02 00:51:06.292738 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:51:06.292745 | orchestrator |
2026-03-02 00:51:06.292752 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-03-02 00:51:06.292759 | orchestrator | Monday 02 March 2026 00:48:56 +0000 (0:00:00.749) 0:00:12.223 **********
2026-03-02 00:51:06.292767 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:51:06.292775 | orchestrator |
2026-03-02 00:51:06.292801 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-03-02 00:51:06.292812 | orchestrator | Monday 02 March 2026 00:48:57 +0000 (0:00:00.892) 0:00:13.116 **********
2026-03-02 00:51:06.292830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-02 00:51:06.292843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-02 00:51:06.292854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-02 00:51:06.292863 | orchestrator |
2026-03-02 00:51:06.292871 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-03-02 00:51:06.292885 | orchestrator | Monday 02 March 2026 00:48:59 +0000 (0:00:01.951) 0:00:15.068 **********
2026-03-02 00:51:06.292906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-02 00:51:06.292918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-02 00:51:06.292927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-02 00:51:06.292936 | orchestrator |
2026-03-02 00:51:06.292944 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-03-02 00:51:06.292953 | orchestrator | Monday 02 March 2026 00:49:01 +0000 (0:00:01.732) 0:00:16.800 **********
2026-03-02 00:51:06.292961 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-02 00:51:06.292969 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-02 00:51:06.292978 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-02 00:51:06.292985 | orchestrator |
2026-03-02 00:51:06.292992 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
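The failed "Check RabbitMQ service" task earlier in this play waits for the string "RabbitMQ Management" to appear in the response from the management port (15672), in the style of `ansible.builtin.wait_for` with `search_regex`; since RabbitMQ was not yet deployed, the probe timed out and was ignored. A simplified stand-alone sketch of that kind of probe (the function name and bare HTTP handling are assumptions, not the module's actual code):

```python
import socket
import time

def wait_for_search_string(host, port, needle, timeout=2.0):
    # Poll host:port until the raw HTTP response contains `needle`,
    # loosely mimicking wait_for with search_regex. Returns False on timeout,
    # which corresponds to the "Timeout when waiting for search string" error.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1) as sock:
                sock.sendall(b"GET / HTTP/1.0\r\n\r\n")
                data = b""
                while True:
                    chunk = sock.recv(4096)
                    if not chunk:
                        break
                    data += chunk
                if needle in data:
                    return True
        except OSError:
            pass  # connection refused or reset: service not up yet
        time.sleep(0.2)
    return False
```

The playbook ignores a failed probe and uses the outcome only to decide between `kolla_action = upgrade` (service already running) and a fresh deploy, as the two "Set kolla_action_rabbitmq" tasks above show.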
2026-03-02 00:51:06.293003 | orchestrator | Monday 02 March 2026 00:49:03 +0000 (0:00:01.800) 0:00:18.601 **********
2026-03-02 00:51:06.293010 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-02 00:51:06.293018 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-02 00:51:06.293024 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-02 00:51:06.293075 | orchestrator |
2026-03-02 00:51:06.293087 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-03-02 00:51:06.293095 | orchestrator | Monday 02 March 2026 00:49:05 +0000 (0:00:01.835) 0:00:20.436 **********
2026-03-02 00:51:06.293104 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-02 00:51:06.293112 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-02 00:51:06.293121 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-02 00:51:06.293129 | orchestrator |
2026-03-02 00:51:06.293136 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-03-02 00:51:06.293144 | orchestrator | Monday 02 March 2026 00:49:06 +0000 (0:00:01.367) 0:00:21.804 **********
2026-03-02 00:51:06.293156 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-02 00:51:06.293164 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-02 00:51:06.293176 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-02 00:51:06.293184 | orchestrator |
2026-03-02 00:51:06.293191 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-03-02 00:51:06.293220 | orchestrator | Monday 02 March 2026 00:49:09 +0000 (0:00:03.305) 0:00:25.110 **********
2026-03-02 00:51:06.293228 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-02 00:51:06.293236 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-02 00:51:06.293243 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-02 00:51:06.293249 | orchestrator |
2026-03-02 00:51:06.293256 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-03-02 00:51:06.293264 | orchestrator | Monday 02 March 2026 00:49:11 +0000 (0:00:01.763) 0:00:26.874 **********
2026-03-02 00:51:06.293271 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-02 00:51:06.293278 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-02 00:51:06.293286 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-02 00:51:06.293293 | orchestrator |
2026-03-02 00:51:06.293300 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-02 00:51:06.293308 | orchestrator | Monday 02 March 2026 00:49:13 +0000 (0:00:01.770) 0:00:28.644 **********
2026-03-02 00:51:06.293314 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:51:06.293321 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:51:06.293327 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:51:06.293334 | orchestrator |
2026-03-02 00:51:06.293342 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-03-02 00:51:06.293349 | orchestrator | Monday 02 March 2026 00:49:13 +0000 (0:00:00.557) 0:00:29.201 **********
2026-03-02 00:51:06.293357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-02 00:51:06.293414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-02 00:51:06.293432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-02 00:51:06.293440 | orchestrator |
2026-03-02 00:51:06.293447 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-03-02 00:51:06.293454 | orchestrator | Monday 02 March 2026 00:49:15 +0000 (0:00:01.557) 0:00:30.759 **********
2026-03-02 00:51:06.293460 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:51:06.293467 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:51:06.293474 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:51:06.293481 | orchestrator |
2026-03-02 00:51:06.293489 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-03-02 00:51:06.293496 | orchestrator | Monday 02 March 2026 00:49:16 +0000 (0:00:01.182) 0:00:31.942 **********
2026-03-02 00:51:06.293503 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:51:06.293510 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:51:06.293517 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:51:06.293524 | orchestrator |
2026-03-02 00:51:06.293531 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-03-02 00:51:06.293537 | orchestrator | Monday 02 March 2026 00:49:24 +0000 (0:00:07.560) 0:00:39.502 **********
2026-03-02 00:51:06.293549 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:51:06.293556 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:51:06.293564 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:51:06.293571 | orchestrator |
2026-03-02 00:51:06.293578 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-02 00:51:06.293585 | orchestrator |
2026-03-02 00:51:06.293592 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-02 00:51:06.293599 | orchestrator | Monday 02 March 2026 00:49:24 +0000 (0:00:00.546) 0:00:40.048 **********
2026-03-02 00:51:06.293606 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:51:06.293613 | orchestrator |
2026-03-02 00:51:06.293621 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-02 00:51:06.293628 | orchestrator | Monday 02 March 2026 00:49:25 +0000 (0:00:00.729) 0:00:40.777 **********
2026-03-02 00:51:06.293637 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:51:06.293645 | orchestrator |
2026-03-02 00:51:06.293651 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-02 00:51:06.293658 | orchestrator | Monday 02 March 2026 00:49:25 +0000 (0:00:00.332) 0:00:41.110 **********
2026-03-02 00:51:06.293664 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:51:06.293669 | orchestrator |
2026-03-02 00:51:06.293675 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-02 00:51:06.293681 | orchestrator | Monday 02 March 2026 00:49:32 +0000 (0:00:06.870) 0:00:47.981 **********
2026-03-02 00:51:06.293686 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:51:06.293692 | orchestrator |
2026-03-02 00:51:06.293698 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-02 00:51:06.293705 | orchestrator |
2026-03-02 00:51:06.293713 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-02 00:51:06.293720 | orchestrator | Monday 02 March 2026 00:50:25 +0000 (0:00:53.240) 0:01:41.222 **********
2026-03-02 00:51:06.293727 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:51:06.293733 | orchestrator |
2026-03-02 00:51:06.293740 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-02 00:51:06.293747 | orchestrator | Monday 02 March 2026 00:50:26 +0000 (0:00:00.640) 0:01:41.862 **********
2026-03-02 00:51:06.293753 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:51:06.293760 | orchestrator |
2026-03-02 00:51:06.293767 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-02 00:51:06.293774 | orchestrator | Monday 02 March 2026 00:50:26 +0000 (0:00:00.208) 0:01:42.071 **********
2026-03-02 00:51:06.293780 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:51:06.293787 | orchestrator |
2026-03-02 00:51:06.293794 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-02 00:51:06.293801 | orchestrator | Monday 02 March 2026 00:50:28 +0000 (0:00:01.526) 0:01:43.597 **********
2026-03-02 00:51:06.293808 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:51:06.293815
| orchestrator | 2026-03-02 00:51:06.293822 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-02 00:51:06.293828 | orchestrator | 2026-03-02 00:51:06.293835 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-02 00:51:06.293843 | orchestrator | Monday 02 March 2026 00:50:44 +0000 (0:00:15.934) 0:01:59.532 ********** 2026-03-02 00:51:06.293850 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:51:06.293857 | orchestrator | 2026-03-02 00:51:06.293942 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-02 00:51:06.293966 | orchestrator | Monday 02 March 2026 00:50:44 +0000 (0:00:00.647) 0:02:00.180 ********** 2026-03-02 00:51:06.293974 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:51:06.293981 | orchestrator | 2026-03-02 00:51:06.293987 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-02 00:51:06.293993 | orchestrator | Monday 02 March 2026 00:50:45 +0000 (0:00:00.197) 0:02:00.377 ********** 2026-03-02 00:51:06.293999 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:51:06.294059 | orchestrator | 2026-03-02 00:51:06.294070 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-02 00:51:06.294088 | orchestrator | Monday 02 March 2026 00:50:46 +0000 (0:00:01.635) 0:02:02.013 ********** 2026-03-02 00:51:06.294096 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:51:06.294103 | orchestrator | 2026-03-02 00:51:06.294111 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-02 00:51:06.294119 | orchestrator | 2026-03-02 00:51:06.294126 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-02 00:51:06.294137 | orchestrator | Monday 02 March 2026 00:51:00 +0000 (0:00:14.105) 
0:02:16.119 ********** 2026-03-02 00:51:06.294145 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:51:06.294153 | orchestrator | 2026-03-02 00:51:06.294160 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-02 00:51:06.294167 | orchestrator | Monday 02 March 2026 00:51:01 +0000 (0:00:00.714) 0:02:16.833 ********** 2026-03-02 00:51:06.294175 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:51:06.294182 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:51:06.294189 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:51:06.294196 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-02 00:51:06.294204 | orchestrator | enable_outward_rabbitmq_True 2026-03-02 00:51:06.294211 | orchestrator | 2026-03-02 00:51:06.294218 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-03-02 00:51:06.294225 | orchestrator | skipping: no hosts matched 2026-03-02 00:51:06.294231 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-02 00:51:06.294239 | orchestrator | outward_rabbitmq_restart 2026-03-02 00:51:06.294246 | orchestrator | 2026-03-02 00:51:06.294253 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-02 00:51:06.294260 | orchestrator | skipping: no hosts matched 2026-03-02 00:51:06.294266 | orchestrator | 2026-03-02 00:51:06.294273 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-02 00:51:06.294281 | orchestrator | skipping: no hosts matched 2026-03-02 00:51:06.294288 | orchestrator | 2026-03-02 00:51:06.294295 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:51:06.294303 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-02 
00:51:06.294312 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-02 00:51:06.294319 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-02 00:51:06.294326 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-02 00:51:06.294333 | orchestrator | 2026-03-02 00:51:06.294341 | orchestrator | 2026-03-02 00:51:06.294348 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 00:51:06.294355 | orchestrator | Monday 02 March 2026 00:51:04 +0000 (0:00:02.677) 0:02:19.511 ********** 2026-03-02 00:51:06.294362 | orchestrator | =============================================================================== 2026-03-02 00:51:06.294417 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 83.28s 2026-03-02 00:51:06.294425 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.03s 2026-03-02 00:51:06.294432 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.56s 2026-03-02 00:51:06.294440 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.03s 2026-03-02 00:51:06.294446 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 3.31s 2026-03-02 00:51:06.294453 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.68s 2026-03-02 00:51:06.294467 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.02s 2026-03-02 00:51:06.294474 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.95s 2026-03-02 00:51:06.294481 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.84s 2026-03-02 00:51:06.294488 | 
orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.80s 2026-03-02 00:51:06.294495 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.77s 2026-03-02 00:51:06.294502 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.76s 2026-03-02 00:51:06.294509 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.73s 2026-03-02 00:51:06.294516 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.56s 2026-03-02 00:51:06.294523 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.37s 2026-03-02 00:51:06.294530 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.25s 2026-03-02 00:51:06.294537 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.18s 2026-03-02 00:51:06.294544 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.10s 2026-03-02 00:51:06.294551 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.10s 2026-03-02 00:51:06.294558 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.96s 2026-03-02 00:51:06.294566 | orchestrator | 2026-03-02 00:51:06 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:51:06.294578 | orchestrator | 2026-03-02 00:51:06 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED 2026-03-02 00:51:06.294585 | orchestrator | 2026-03-02 00:51:06 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:51:09.336870 | orchestrator | 2026-03-02 00:51:09 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:51:09.339633 | orchestrator | 2026-03-02 00:51:09 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 
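The surrounding log shows the orchestrator polling several task IDs once per second, logging "Task … is in state STARTED" and "Wait 1 second(s) until the next check" until each task reports SUCCESS. A minimal sketch of that wait loop, assuming a caller-supplied `get_task_state` lookup as a hypothetical stand-in for the real task-state API:

```python
import time


def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=600.0):
    """Poll each pending task once per interval until all reach SUCCESS.

    get_task_state maps a task ID to a state string such as "STARTED"
    or "SUCCESS" (a hypothetical stand-in for the orchestrator's real
    task-state lookup). Raises TimeoutError if tasks remain pending
    past the deadline.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
        # Drop tasks that have finished; keep polling the rest.
        pending = {t for t in pending if states[t] != "SUCCESS"}
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states
```

This mirrors the fixed one-second cadence seen in the log; the real watcher may differ in its state names, backoff, and error handling.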
2026-03-02 00:51:09.339689 | orchestrator | 2026-03-02 00:51:09 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED 2026-03-02 00:51:09.339697 | orchestrator | 2026-03-02 00:51:09 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:51:12.380520 | orchestrator | 2026-03-02 00:51:12 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:51:12.380776 | orchestrator | 2026-03-02 00:51:12 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:51:12.381475 | orchestrator | 2026-03-02 00:51:12 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED 2026-03-02 00:51:12.381505 | orchestrator | 2026-03-02 00:51:12 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:51:15.417676 | orchestrator | 2026-03-02 00:51:15 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:51:15.419615 | orchestrator | 2026-03-02 00:51:15 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:51:15.420284 | orchestrator | 2026-03-02 00:51:15 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED 2026-03-02 00:51:15.420352 | orchestrator | 2026-03-02 00:51:15 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:51:18.455228 | orchestrator | 2026-03-02 00:51:18 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:51:18.456103 | orchestrator | 2026-03-02 00:51:18 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:51:18.457268 | orchestrator | 2026-03-02 00:51:18 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED 2026-03-02 00:51:18.457487 | orchestrator | 2026-03-02 00:51:18 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:51:21.493860 | orchestrator | 2026-03-02 00:51:21 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:51:21.494165 | orchestrator | 2026-03-02 
00:51:21 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:51:21.495037 | orchestrator | 2026-03-02 00:51:21 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED 2026-03-02 00:51:21.495075 | orchestrator | 2026-03-02 00:51:21 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:51:24.542874 | orchestrator | 2026-03-02 00:51:24 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:51:24.543057 | orchestrator | 2026-03-02 00:51:24 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:51:24.544024 | orchestrator | 2026-03-02 00:51:24 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED 2026-03-02 00:51:24.544105 | orchestrator | 2026-03-02 00:51:24 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:51:27.573262 | orchestrator | 2026-03-02 00:51:27 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:51:27.574087 | orchestrator | 2026-03-02 00:51:27 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:51:27.575031 | orchestrator | 2026-03-02 00:51:27 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED 2026-03-02 00:51:27.575087 | orchestrator | 2026-03-02 00:51:27 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:51:30.616848 | orchestrator | 2026-03-02 00:51:30 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:51:30.618802 | orchestrator | 2026-03-02 00:51:30 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:51:30.620933 | orchestrator | 2026-03-02 00:51:30 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED 2026-03-02 00:51:30.620971 | orchestrator | 2026-03-02 00:51:30 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:51:33.659588 | orchestrator | 2026-03-02 00:51:33 | INFO  | Task 
d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:51:33.660885 | orchestrator | 2026-03-02 00:51:33 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:51:33.664000 | orchestrator | 2026-03-02 00:51:33 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED 2026-03-02 00:51:33.664075 | orchestrator | 2026-03-02 00:51:33 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:51:36.703559 | orchestrator | 2026-03-02 00:51:36 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:51:36.704255 | orchestrator | 2026-03-02 00:51:36 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:51:36.704623 | orchestrator | 2026-03-02 00:51:36 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED 2026-03-02 00:51:36.704679 | orchestrator | 2026-03-02 00:51:36 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:51:39.739183 | orchestrator | 2026-03-02 00:51:39 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:51:39.741082 | orchestrator | 2026-03-02 00:51:39 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:51:39.742757 | orchestrator | 2026-03-02 00:51:39 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED 2026-03-02 00:51:39.742872 | orchestrator | 2026-03-02 00:51:39 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:51:42.776770 | orchestrator | 2026-03-02 00:51:42 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:51:42.777114 | orchestrator | 2026-03-02 00:51:42 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:51:42.777870 | orchestrator | 2026-03-02 00:51:42 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED 2026-03-02 00:51:42.777907 | orchestrator | 2026-03-02 00:51:42 | INFO  | Wait 1 second(s) until the next 
check 2026-03-02 00:51:45.813897 | orchestrator | 2026-03-02 00:51:45 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:51:45.814974 | orchestrator | 2026-03-02 00:51:45 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:51:45.815992 | orchestrator | 2026-03-02 00:51:45 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED 2026-03-02 00:51:45.816032 | orchestrator | 2026-03-02 00:51:45 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:51:48.848549 | orchestrator | 2026-03-02 00:51:48 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:51:48.849863 | orchestrator | 2026-03-02 00:51:48 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:51:48.851752 | orchestrator | 2026-03-02 00:51:48 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state STARTED 2026-03-02 00:51:48.854057 | orchestrator | 2026-03-02 00:51:48 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:51:51.888297 | orchestrator | 2026-03-02 00:51:51 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:51:51.889694 | orchestrator | 2026-03-02 00:51:51 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:51:51.892593 | orchestrator | 2026-03-02 00:51:51 | INFO  | Task 50ba1490-52f3-416a-a479-7ffe1102a90d is in state SUCCESS 2026-03-02 00:51:51.892654 | orchestrator | 2026-03-02 00:51:51.893964 | orchestrator | 2026-03-02 00:51:51.894050 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-02 00:51:51.894064 | orchestrator | 2026-03-02 00:51:51.894071 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-02 00:51:51.894077 | orchestrator | Monday 02 March 2026 00:49:34 +0000 (0:00:00.154) 0:00:00.154 ********** 2026-03-02 00:51:51.894083 | orchestrator | 
ok: [testbed-node-3] 2026-03-02 00:51:51.894090 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:51:51.894096 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:51:51.894101 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:51:51.894107 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:51:51.894112 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:51:51.894117 | orchestrator | 2026-03-02 00:51:51.894123 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-02 00:51:51.894129 | orchestrator | Monday 02 March 2026 00:49:34 +0000 (0:00:00.618) 0:00:00.772 ********** 2026-03-02 00:51:51.894134 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-02 00:51:51.894140 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-02 00:51:51.894146 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-02 00:51:51.894152 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-02 00:51:51.894157 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-02 00:51:51.894163 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-02 00:51:51.894168 | orchestrator | 2026-03-02 00:51:51.894174 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-02 00:51:51.894179 | orchestrator | 2026-03-02 00:51:51.894185 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-02 00:51:51.894209 | orchestrator | Monday 02 March 2026 00:49:35 +0000 (0:00:00.791) 0:00:01.564 ********** 2026-03-02 00:51:51.894216 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:51:51.894312 | orchestrator | 2026-03-02 00:51:51.894343 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] 
********************** 2026-03-02 00:51:51.894354 | orchestrator | Monday 02 March 2026 00:49:36 +0000 (0:00:00.971) 0:00:02.536 ********** 2026-03-02 00:51:51.894373 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894390 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894396 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 
00:51:51.894414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894425 | orchestrator | 2026-03-02 00:51:51.894443 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-02 00:51:51.894449 | orchestrator | Monday 02 March 2026 00:49:37 +0000 (0:00:01.141) 0:00:03.678 ********** 2026-03-02 00:51:51.894455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894460 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894472 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894482 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894499 | 
orchestrator | 2026-03-02 00:51:51.894504 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-02 00:51:51.894510 | orchestrator | Monday 02 March 2026 00:49:39 +0000 (0:00:01.511) 0:00:05.189 ********** 2026-03-02 00:51:51.894516 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894521 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894532 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894562 | orchestrator | 2026-03-02 00:51:51.894568 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-02 00:51:51.894575 | orchestrator | Monday 02 March 2026 00:49:40 +0000 (0:00:01.101) 0:00:06.290 ********** 2026-03-02 00:51:51.894584 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894591 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894598 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894624 | orchestrator | 2026-03-02 00:51:51.894638 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-02 00:51:51.894645 | orchestrator | Monday 02 March 2026 00:49:41 +0000 (0:00:01.252) 0:00:07.543 ********** 2026-03-02 00:51:51.894651 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894658 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894665 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.894695 | orchestrator | 2026-03-02 00:51:51.894702 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-02 00:51:51.894708 | orchestrator | Monday 02 March 2026 00:49:42 +0000 (0:00:01.195) 0:00:08.739 ********** 2026-03-02 00:51:51.894715 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:51:51.894722 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:51:51.894728 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:51:51.894734 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:51:51.894740 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:51:51.894747 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:51:51.894753 | orchestrator | 2026-03-02 00:51:51.894759 | orchestrator | TASK [ovn-controller 
: Configure OVN in OVSDB] ********************************* 2026-03-02 00:51:51.894766 | orchestrator | Monday 02 March 2026 00:49:45 +0000 (0:00:02.179) 0:00:10.918 ********** 2026-03-02 00:51:51.894772 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-02 00:51:51.894779 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-02 00:51:51.894790 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-02 00:51:51.894796 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-02 00:51:51.894803 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-02 00:51:51.894809 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-02 00:51:51.894816 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-02 00:51:51.894822 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-02 00:51:51.894832 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-02 00:51:51.894838 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-02 00:51:51.894845 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-02 00:51:51.894854 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-02 00:51:51.894864 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-02 00:51:51.894878 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-02 00:51:51.894890 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-02 00:51:51.894899 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-02 00:51:51.894909 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-02 00:51:51.894918 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-02 00:51:51.894926 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-02 00:51:51.894936 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-02 00:51:51.894944 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-02 00:51:51.894953 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-02 00:51:51.894968 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-02 00:51:51.894978 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-02 00:51:51.894987 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-02 00:51:51.894997 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-02 00:51:51.895006 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-02 00:51:51.895016 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-02 00:51:51.895025 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-02 00:51:51.895035 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-02 00:51:51.895043 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-02 00:51:51.895049 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-02 00:51:51.895060 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-02 00:51:51.895065 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-02 00:51:51.895070 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-02 00:51:51.895076 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-02 00:51:51.895081 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-02 00:51:51.895086 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-02 00:51:51.895092 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-02 00:51:51.895097 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-02 00:51:51.895103 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-02 
00:51:51.895108 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-02 00:51:51.895113 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-02 00:51:51.895119 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-02 00:51:51.895129 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-02 00:51:51.895135 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-02 00:51:51.895140 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-02 00:51:51.895145 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-02 00:51:51.895151 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-02 00:51:51.895156 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-02 00:51:51.895162 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-02 00:51:51.895167 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-02 00:51:51.895172 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-02 00:51:51.895177 | 
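The "Configure OVN in OVSDB" task above writes one `external_ids` key per loop item on each chassis: the node's `ovn-encap-ip`, `ovn-encap-type` geneve, an `ovn-remote` listing the SB DB endpoints on the three controllers, probe intervals, and — only on the gateway-capable controllers — the provider bridge mapping and CMS options. A minimal sketch of how those per-node values fit together, using the values visible in the log (the helper name and signature are hypothetical; the role's real variables differ):

```python
# Sketch: assemble the external_ids written by "Configure OVN in OVSDB",
# using the values visible in the log. ovn_external_ids() is hypothetical.
def ovn_external_ids(node_index, is_gateway):
    """Return the external_ids dict for testbed-node-<node_index>."""
    controllers = [f"192.168.16.{10 + i}" for i in range(3)]
    ids = {
        "ovn-encap-ip": f"192.168.16.{10 + node_index}",
        "ovn-encap-type": "geneve",
        # SB DB raft endpoints on the three controllers, port 6642
        "ovn-remote": ",".join(f"tcp:{ip}:6642" for ip in controllers),
        "ovn-remote-probe-interval": "60000",
        "ovn-openflow-probe-interval": "60",
        "ovn-monitor-all": False,
    }
    if is_gateway:
        # Only gateway chassis carry the provider bridge mapping and
        # advertise themselves to the CMS (nodes 0-2 in this run).
        ids["ovn-bridge-mappings"] = "physnet1:br-ex"
        ids["ovn-cms-options"] = "enable-chassis-as-gw,availability-zones=nova"
    return ids

ids = ovn_external_ids(1, is_gateway=True)
print(ids["ovn-remote"])
# tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642
```

This mirrors why the log shows `ovn-bridge-mappings` as `present` on testbed-node-0/1/2 but `absent` on testbed-node-3/4/5.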
orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-02 00:51:51.895183 | orchestrator | 2026-03-02 00:51:51.895188 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-02 00:51:51.895194 | orchestrator | Monday 02 March 2026 00:50:05 +0000 (0:00:20.116) 0:00:31.035 ********** 2026-03-02 00:51:51.895199 | orchestrator | 2026-03-02 00:51:51.895204 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-02 00:51:51.895210 | orchestrator | Monday 02 March 2026 00:50:05 +0000 (0:00:00.059) 0:00:31.094 ********** 2026-03-02 00:51:51.895215 | orchestrator | 2026-03-02 00:51:51.895220 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-02 00:51:51.895229 | orchestrator | Monday 02 March 2026 00:50:05 +0000 (0:00:00.056) 0:00:31.151 ********** 2026-03-02 00:51:51.895239 | orchestrator | 2026-03-02 00:51:51.895245 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-02 00:51:51.895250 | orchestrator | Monday 02 March 2026 00:50:05 +0000 (0:00:00.060) 0:00:31.211 ********** 2026-03-02 00:51:51.895255 | orchestrator | 2026-03-02 00:51:51.895261 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-02 00:51:51.895266 | orchestrator | Monday 02 March 2026 00:50:05 +0000 (0:00:00.056) 0:00:31.268 ********** 2026-03-02 00:51:51.895271 | orchestrator | 2026-03-02 00:51:51.895277 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-02 00:51:51.895282 | orchestrator | Monday 02 March 2026 00:50:05 +0000 (0:00:00.069) 0:00:31.338 ********** 2026-03-02 00:51:51.895287 | orchestrator | 2026-03-02 00:51:51.895293 | orchestrator | RUNNING HANDLER [ovn-controller : 
Reload systemd config] *********************** 2026-03-02 00:51:51.895298 | orchestrator | Monday 02 March 2026 00:50:05 +0000 (0:00:00.059) 0:00:31.397 ********** 2026-03-02 00:51:51.895303 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:51:51.895309 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:51:51.895314 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:51:51.895391 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:51:51.895399 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:51:51.895404 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:51:51.895410 | orchestrator | 2026-03-02 00:51:51.895415 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-02 00:51:51.895422 | orchestrator | Monday 02 March 2026 00:50:07 +0000 (0:00:01.687) 0:00:33.085 ********** 2026-03-02 00:51:51.895430 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:51:51.895439 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:51:51.895448 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:51:51.895457 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:51:51.895466 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:51:51.895474 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:51:51.895483 | orchestrator | 2026-03-02 00:51:51.895492 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-02 00:51:51.895498 | orchestrator | 2026-03-02 00:51:51.895503 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-02 00:51:51.895508 | orchestrator | Monday 02 March 2026 00:50:33 +0000 (0:00:26.499) 0:00:59.584 ********** 2026-03-02 00:51:51.895514 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:51:51.895519 | orchestrator | 2026-03-02 00:51:51.895524 | orchestrator | TASK [ovn-db : include_tasks] 
************************************************** 2026-03-02 00:51:51.895530 | orchestrator | Monday 02 March 2026 00:50:34 +0000 (0:00:00.612) 0:01:00.196 ********** 2026-03-02 00:51:51.895535 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:51:51.895541 | orchestrator | 2026-03-02 00:51:51.895546 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-02 00:51:51.895552 | orchestrator | Monday 02 March 2026 00:50:34 +0000 (0:00:00.422) 0:01:00.619 ********** 2026-03-02 00:51:51.895561 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:51:51.895568 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:51:51.895581 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:51:51.895592 | orchestrator | 2026-03-02 00:51:51.895602 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-02 00:51:51.895610 | orchestrator | Monday 02 March 2026 00:50:35 +0000 (0:00:00.871) 0:01:01.490 ********** 2026-03-02 00:51:51.895619 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:51:51.895627 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:51:51.895636 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:51:51.895645 | orchestrator | 2026-03-02 00:51:51.895658 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-02 00:51:51.895664 | orchestrator | Monday 02 March 2026 00:50:35 +0000 (0:00:00.342) 0:01:01.832 ********** 2026-03-02 00:51:51.895676 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:51:51.895682 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:51:51.895687 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:51:51.895693 | orchestrator | 2026-03-02 00:51:51.895698 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-02 00:51:51.895703 | orchestrator | Monday 02 March 2026 
00:50:36 +0000 (0:00:00.402) 0:01:02.235 ********** 2026-03-02 00:51:51.895709 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:51:51.895714 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:51:51.895719 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:51:51.895725 | orchestrator | 2026-03-02 00:51:51.895732 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-02 00:51:51.895741 | orchestrator | Monday 02 March 2026 00:50:36 +0000 (0:00:00.332) 0:01:02.568 ********** 2026-03-02 00:51:51.895749 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:51:51.895757 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:51:51.895764 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:51:51.895772 | orchestrator | 2026-03-02 00:51:51.895780 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-02 00:51:51.895789 | orchestrator | Monday 02 March 2026 00:50:37 +0000 (0:00:00.596) 0:01:03.165 ********** 2026-03-02 00:51:51.895797 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:51:51.895806 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:51:51.895814 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:51:51.895821 | orchestrator | 2026-03-02 00:51:51.895830 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-02 00:51:51.895837 | orchestrator | Monday 02 March 2026 00:50:37 +0000 (0:00:00.571) 0:01:03.737 ********** 2026-03-02 00:51:51.895845 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:51:51.895852 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:51:51.895860 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:51:51.895868 | orchestrator | 2026-03-02 00:51:51.895875 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-02 00:51:51.895883 | orchestrator | Monday 02 March 2026 00:50:38 +0000 (0:00:00.559) 
0:01:04.297 ********** 2026-03-02 00:51:51.895891 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:51:51.895898 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:51:51.895906 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:51:51.895915 | orchestrator | 2026-03-02 00:51:51.895934 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-02 00:51:51.895941 | orchestrator | Monday 02 March 2026 00:50:38 +0000 (0:00:00.404) 0:01:04.701 ********** 2026-03-02 00:51:51.895949 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:51:51.895957 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:51:51.895964 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:51:51.895972 | orchestrator | 2026-03-02 00:51:51.895980 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-02 00:51:51.895989 | orchestrator | Monday 02 March 2026 00:50:39 +0000 (0:00:00.834) 0:01:05.536 ********** 2026-03-02 00:51:51.895997 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:51:51.896008 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:51:51.896014 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:51:51.896019 | orchestrator | 2026-03-02 00:51:51.896025 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-02 00:51:51.896030 | orchestrator | Monday 02 March 2026 00:50:40 +0000 (0:00:00.520) 0:01:06.056 ********** 2026-03-02 00:51:51.896035 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:51:51.896041 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:51:51.896046 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:51:51.896051 | orchestrator | 2026-03-02 00:51:51.896057 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-02 00:51:51.896062 | orchestrator | Monday 02 March 2026 00:50:40 +0000 (0:00:00.384) 
0:01:06.440 ********** 2026-03-02 00:51:51.896068 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:51:51.896073 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:51:51.896084 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:51:51.896090 | orchestrator | 2026-03-02 00:51:51.896095 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-02 00:51:51.896101 | orchestrator | Monday 02 March 2026 00:50:40 +0000 (0:00:00.274) 0:01:06.715 ********** 2026-03-02 00:51:51.896106 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:51:51.896111 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:51:51.896116 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:51:51.896122 | orchestrator | 2026-03-02 00:51:51.896127 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-02 00:51:51.896133 | orchestrator | Monday 02 March 2026 00:50:41 +0000 (0:00:00.481) 0:01:07.196 ********** 2026-03-02 00:51:51.896138 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:51:51.896143 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:51:51.896149 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:51:51.896154 | orchestrator | 2026-03-02 00:51:51.896160 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-02 00:51:51.896165 | orchestrator | Monday 02 March 2026 00:50:41 +0000 (0:00:00.271) 0:01:07.468 ********** 2026-03-02 00:51:51.896170 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:51:51.896176 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:51:51.896181 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:51:51.896187 | orchestrator | 2026-03-02 00:51:51.896192 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-02 00:51:51.896198 | orchestrator | Monday 02 March 2026 00:50:41 +0000 (0:00:00.269) 
0:01:07.737 ********** 2026-03-02 00:51:51.896203 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:51:51.896226 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:51:51.896238 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:51:51.896244 | orchestrator | 2026-03-02 00:51:51.896250 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-02 00:51:51.896255 | orchestrator | Monday 02 March 2026 00:50:42 +0000 (0:00:00.240) 0:01:07.977 ********** 2026-03-02 00:51:51.896261 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:51:51.896270 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:51:51.896286 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:51:51.896297 | orchestrator | 2026-03-02 00:51:51.896305 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-02 00:51:51.896313 | orchestrator | Monday 02 March 2026 00:50:42 +0000 (0:00:00.365) 0:01:08.342 ********** 2026-03-02 00:51:51.896340 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:51:51.896350 | orchestrator | 2026-03-02 00:51:51.896360 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-02 00:51:51.896368 | orchestrator | Monday 02 March 2026 00:50:42 +0000 (0:00:00.562) 0:01:08.904 ********** 2026-03-02 00:51:51.896378 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:51:51.896384 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:51:51.896389 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:51:51.896395 | orchestrator | 2026-03-02 00:51:51.896400 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-02 00:51:51.896408 | orchestrator | Monday 02 March 2026 00:50:43 +0000 (0:00:00.384) 0:01:09.289 ********** 2026-03-02 00:51:51.896418 | orchestrator | ok: 
[testbed-node-0] 2026-03-02 00:51:51.896426 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:51:51.896435 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:51:51.896444 | orchestrator | 2026-03-02 00:51:51.896452 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-03-02 00:51:51.896460 | orchestrator | Monday 02 March 2026 00:50:43 +0000 (0:00:00.377) 0:01:09.667 ********** 2026-03-02 00:51:51.896470 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:51:51.896480 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:51:51.896490 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:51:51.896507 | orchestrator | 2026-03-02 00:51:51.896513 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-02 00:51:51.896518 | orchestrator | Monday 02 March 2026 00:50:44 +0000 (0:00:00.448) 0:01:10.115 ********** 2026-03-02 00:51:51.896523 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:51:51.896528 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:51:51.896534 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:51:51.896539 | orchestrator | 2026-03-02 00:51:51.896544 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-02 00:51:51.896550 | orchestrator | Monday 02 March 2026 00:50:44 +0000 (0:00:00.305) 0:01:10.420 ********** 2026-03-02 00:51:51.896555 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:51:51.896560 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:51:51.896566 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:51:51.896571 | orchestrator | 2026-03-02 00:51:51.896582 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-02 00:51:51.896587 | orchestrator | Monday 02 March 2026 00:50:44 +0000 (0:00:00.317) 0:01:10.738 ********** 2026-03-02 00:51:51.896593 | orchestrator | skipping: 
[testbed-node-0] 2026-03-02 00:51:51.896598 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:51:51.896604 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:51:51.896609 | orchestrator | 2026-03-02 00:51:51.896614 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-02 00:51:51.896620 | orchestrator | Monday 02 March 2026 00:50:45 +0000 (0:00:00.328) 0:01:11.066 ********** 2026-03-02 00:51:51.896625 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:51:51.896631 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:51:51.896636 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:51:51.896641 | orchestrator | 2026-03-02 00:51:51.896647 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-02 00:51:51.896652 | orchestrator | Monday 02 March 2026 00:50:45 +0000 (0:00:00.444) 0:01:11.511 ********** 2026-03-02 00:51:51.896657 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:51:51.896663 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:51:51.896668 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:51:51.896676 | orchestrator | 2026-03-02 00:51:51.896684 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-02 00:51:51.896698 | orchestrator | Monday 02 March 2026 00:50:45 +0000 (0:00:00.280) 0:01:11.792 ********** 2026-03-02 00:51:51.896711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.896731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.896741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897046 | 
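Before the tasks above, the ovn-db role checked each controller for existing NB/SB container volumes and divided hosts accordingly; since no volumes existed, all the liveness and leader/follower checks were skipped and all three nodes took the bootstrap-initial "new cluster" path. A sketch of that partition logic under those assumptions (function and label names are hypothetical, not the role's own):

```python
# Sketch of the host-partition decision in the ovn-db lookup/bootstrap
# steps above: hosts with an existing DB volume rejoin, the rest bootstrap.
# plan_bootstrap() and its labels are hypothetical names.
def plan_bootstrap(volume_present):
    """Map host -> bootstrap path for one OVN DB (NB or SB)."""
    cluster_exists = any(volume_present.values())
    plan = {}
    for host, has_volume in volume_present.items():
        if not cluster_exists:
            plan[host] = "new cluster"   # fresh deployment, as in this run
        elif has_volume:
            plan[host] = "existing"      # already a member, nothing to do
        else:
            plan[host] = "new member"    # join the running cluster
    return plan

# In this run no volumes were found, so every node bootstraps:
fresh = plan_bootstrap({f"testbed-node-{i}": False for i in range(3)})
```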
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 00:51:51.897057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 00:51:51.897064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 00:51:51.897069 | orchestrator |
2026-03-02 00:51:51.897075 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-02 00:51:51.897081 | orchestrator | Monday 02 March 2026 00:50:47 +0000 (0:00:01.492) 0:01:13.285 **********
2026-03-02 00:51:51.897087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 00:51:51.897093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 00:51:51.897099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 00:51:51.897105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 00:51:51.897117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 00:51:51.897134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 00:51:51.897143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 00:51:51.897153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 00:51:51.897168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 00:51:51.897177 | orchestrator |
2026-03-02 00:51:51.897186 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-02 00:51:51.897191 | orchestrator | Monday 02 March 2026 00:50:52 +0000 (0:00:04.828) 0:01:18.113 **********
2026-03-02 00:51:51.897197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 00:51:51.897203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 00:51:51.897208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 00:51:51.897214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 00:51:51.897219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 00:51:51.897234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 00:51:51.897241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 00:51:51.897246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 00:51:51.897252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 00:51:51.897257 | orchestrator |
2026-03-02 00:51:51.897263 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-02 00:51:51.897268 | orchestrator | Monday 02 March 2026 00:50:55 +0000 (0:00:02.813) 0:01:20.926 **********
2026-03-02 00:51:51.897274 | orchestrator |
2026-03-02 00:51:51.897279 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-02 00:51:51.897284 | orchestrator | Monday 02 March 2026 00:50:55 +0000 (0:00:00.068) 0:01:20.995 **********
2026-03-02 00:51:51.897290 | orchestrator |
2026-03-02 00:51:51.897298 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-02 00:51:51.897304 | orchestrator | Monday 02 March 2026 00:50:55 +0000 (0:00:00.063) 0:01:21.058 **********
2026-03-02 00:51:51.897309 | orchestrator |
2026-03-02 00:51:51.897314 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-02 00:51:51.897339 | orchestrator | Monday 02 March 2026 00:50:55 +0000 (0:00:00.067) 0:01:21.125 **********
2026-03-02 00:51:51.897349 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:51:51.897358 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:51:51.897364 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:51:51.897369 | orchestrator |
2026-03-02 00:51:51.897398 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-02 00:51:51.897405 | orchestrator | Monday 02 March 2026 00:51:02 +0000 (0:00:07.343) 0:01:28.469 **********
2026-03-02 00:51:51.897410 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:51:51.897415 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:51:51.897421 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:51:51.897426 | orchestrator |
2026-03-02 00:51:51.897432 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-02 00:51:51.897437 | orchestrator | Monday 02 March 2026 00:51:10 +0000 (0:00:07.844) 0:01:36.313 **********
2026-03-02 00:51:51.897443 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:51:51.897448 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:51:51.897459 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:51:51.897465 | orchestrator |
2026-03-02 00:51:51.897471 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-02 00:51:51.897476 | orchestrator | Monday 02 March 2026 00:51:12 +0000 (0:00:02.405) 0:01:38.719 **********
2026-03-02 00:51:51.897482 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:51:51.897487 | orchestrator |
2026-03-02 00:51:51.897492 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-02 00:51:51.897498 | orchestrator | Monday 02 March 2026 00:51:13 +0000 (0:00:00.280) 0:01:38.999 **********
2026-03-02 00:51:51.897503 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:51:51.897509 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:51:51.897515 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:51:51.897520 | orchestrator |
2026-03-02 00:51:51.897525 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-02 00:51:51.897531 | orchestrator | Monday 02 March 2026 00:51:13 +0000 (0:00:00.801) 0:01:39.801 **********
2026-03-02 00:51:51.897536 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:51:51.897543 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:51:51.897553 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:51:51.897561 | orchestrator |
2026-03-02 00:51:51.897570 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-02 00:51:51.897579 | orchestrator | Monday 02 March 2026 00:51:14 +0000 (0:00:00.605) 0:01:40.407 **********
2026-03-02 00:51:51.897587 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:51:51.897595 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:51:51.897604 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:51:51.897612 | orchestrator |
2026-03-02 00:51:51.897620 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-02 00:51:51.897630 | orchestrator | Monday 02 March 2026 00:51:15 +0000 (0:00:00.746) 0:01:41.154 **********
2026-03-02 00:51:51.897638 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:51:51.897649 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:51:51.897658 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:51:51.897668 | orchestrator |
2026-03-02 00:51:51.897677 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-02 00:51:51.897684 | orchestrator | Monday 02 March 2026 00:51:16 +0000 (0:00:00.896) 0:01:42.050 **********
2026-03-02 00:51:51.897690 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:51:51.897697 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:51:51.897709 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:51:51.897715 | orchestrator |
2026-03-02 00:51:51.897722 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-02 00:51:51.897728 | orchestrator | Monday 02 March 2026 00:51:16 +0000 (0:00:00.798) 0:01:42.849 **********
2026-03-02 00:51:51.897735 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:51:51.897741 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:51:51.897751 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:51:51.897760 | orchestrator |
2026-03-02 00:51:51.897770 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-03-02 00:51:51.897780 | orchestrator | Monday 02 March 2026 00:51:17 +0000 (0:00:00.852) 0:01:43.702 **********
2026-03-02 00:51:51.897787 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:51:51.897792 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:51:51.897798 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:51:51.897803 | orchestrator |
2026-03-02 00:51:51.897809 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-02 00:51:51.897814 | orchestrator | Monday 02 March 2026 00:51:18 +0000 (0:00:00.297) 0:01:43.999 **********
2026-03-02 00:51:51.897820 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897831 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897838 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897843 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897850 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897856 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897862 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897868 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897902 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897910 | orchestrator | 2026-03-02 00:51:51.897915 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-02 00:51:51.897921 | 
orchestrator | Monday 02 March 2026 00:51:19 +0000 (0:00:01.639) 0:01:45.639 ********** 2026-03-02 00:51:51.897927 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897932 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897942 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897952 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897971 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.897988 | orchestrator | 2026-03-02 
00:51:51.897994 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-02 00:51:51.897999 | orchestrator | Monday 02 March 2026 00:51:24 +0000 (0:00:04.767) 0:01:50.407 ********** 2026-03-02 00:51:51.898010 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.898060 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.898078 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.898084 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.898094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 
'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.898101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.898112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.898121 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.898131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 00:51:51.898140 | orchestrator | 2026-03-02 00:51:51.898149 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-02 00:51:51.898158 | orchestrator | Monday 02 March 2026 00:51:27 +0000 (0:00:03.019) 0:01:53.426 ********** 2026-03-02 00:51:51.898167 | orchestrator | 2026-03-02 00:51:51.898176 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-02 00:51:51.898186 | orchestrator | Monday 02 March 2026 00:51:27 +0000 (0:00:00.061) 0:01:53.488 ********** 2026-03-02 00:51:51.898196 | orchestrator | 2026-03-02 00:51:51.898206 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-02 00:51:51.898215 | orchestrator | Monday 02 March 2026 00:51:27 +0000 (0:00:00.061) 0:01:53.549 ********** 2026-03-02 00:51:51.898224 | orchestrator | 2026-03-02 00:51:51.898230 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-02 00:51:51.898242 | orchestrator | Monday 02 March 2026 00:51:27 +0000 (0:00:00.059) 0:01:53.608 ********** 2026-03-02 00:51:51.898248 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:51:51.898254 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:51:51.898259 | orchestrator | 2026-03-02 00:51:51.898270 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-02 00:51:51.898275 | orchestrator | Monday 02 March 2026 00:51:33 +0000 (0:00:06.150) 0:01:59.759 ********** 2026-03-02 00:51:51.898282 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:51:51.898288 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:51:51.898293 | orchestrator | 2026-03-02 00:51:51.898299 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-02 00:51:51.898304 | orchestrator | Monday 02 March 2026 
00:51:40 +0000 (0:00:06.395) 0:02:06.154 ********** 2026-03-02 00:51:51.898310 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:51:51.898315 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:51:51.898372 | orchestrator | 2026-03-02 00:51:51.898380 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-02 00:51:51.898385 | orchestrator | Monday 02 March 2026 00:51:46 +0000 (0:00:06.378) 0:02:12.532 ********** 2026-03-02 00:51:51.898391 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:51:51.898397 | orchestrator | 2026-03-02 00:51:51.898402 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-02 00:51:51.898408 | orchestrator | Monday 02 March 2026 00:51:46 +0000 (0:00:00.110) 0:02:12.643 ********** 2026-03-02 00:51:51.898413 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:51:51.898419 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:51:51.898425 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:51:51.898430 | orchestrator | 2026-03-02 00:51:51.898436 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-02 00:51:51.898441 | orchestrator | Monday 02 March 2026 00:51:47 +0000 (0:00:00.787) 0:02:13.431 ********** 2026-03-02 00:51:51.898447 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:51:51.898452 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:51:51.898458 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:51:51.898463 | orchestrator | 2026-03-02 00:51:51.898469 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-02 00:51:51.898474 | orchestrator | Monday 02 March 2026 00:51:48 +0000 (0:00:00.634) 0:02:14.065 ********** 2026-03-02 00:51:51.898480 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:51:51.898485 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:51:51.898490 | orchestrator | ok: 
[testbed-node-2] 2026-03-02 00:51:51.898496 | orchestrator | 2026-03-02 00:51:51.898501 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-02 00:51:51.898507 | orchestrator | Monday 02 March 2026 00:51:48 +0000 (0:00:00.787) 0:02:14.852 ********** 2026-03-02 00:51:51.898512 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:51:51.898522 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:51:51.898528 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:51:51.898535 | orchestrator | 2026-03-02 00:51:51.898544 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-02 00:51:51.898558 | orchestrator | Monday 02 March 2026 00:51:49 +0000 (0:00:00.682) 0:02:15.535 ********** 2026-03-02 00:51:51.898570 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:51:51.898578 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:51:51.898588 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:51:51.898596 | orchestrator | 2026-03-02 00:51:51.898604 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-02 00:51:51.898613 | orchestrator | Monday 02 March 2026 00:51:50 +0000 (0:00:00.796) 0:02:16.331 ********** 2026-03-02 00:51:51.898622 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:51:51.898630 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:51:51.898638 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:51:51.898647 | orchestrator | 2026-03-02 00:51:51.898656 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:51:51.898674 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-02 00:51:51.898683 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-02 00:51:51.898693 | orchestrator | testbed-node-2 : ok=43  changed=19  
unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-02 00:51:51.898702 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:51:51.898712 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:51:51.898721 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 00:51:51.898731 | orchestrator | 2026-03-02 00:51:51.898740 | orchestrator | 2026-03-02 00:51:51.898749 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 00:51:51.898759 | orchestrator | Monday 02 March 2026 00:51:51 +0000 (0:00:00.889) 0:02:17.221 ********** 2026-03-02 00:51:51.898769 | orchestrator | =============================================================================== 2026-03-02 00:51:51.898775 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 26.50s 2026-03-02 00:51:51.898780 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.12s 2026-03-02 00:51:51.898786 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.24s 2026-03-02 00:51:51.898791 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.49s 2026-03-02 00:51:51.898796 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.78s 2026-03-02 00:51:51.898802 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.83s 2026-03-02 00:51:51.898807 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.77s 2026-03-02 00:51:51.898819 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.02s 2026-03-02 00:51:51.898825 | orchestrator | ovn-db : Check ovn containers 
------------------------------------------- 2.81s 2026-03-02 00:51:51.898830 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.18s 2026-03-02 00:51:51.898836 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.69s 2026-03-02 00:51:51.898844 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.64s 2026-03-02 00:51:51.898853 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.51s 2026-03-02 00:51:51.898864 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.49s 2026-03-02 00:51:51.898878 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.25s 2026-03-02 00:51:51.898886 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.20s 2026-03-02 00:51:51.898895 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.14s 2026-03-02 00:51:51.898903 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.10s 2026-03-02 00:51:51.898910 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 0.97s 2026-03-02 00:51:51.898917 | orchestrator | ovn-db : Configure OVN SB connection settings --------------------------- 0.90s 2026-03-02 00:51:51.898924 | orchestrator | 2026-03-02 00:51:51 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:51:54.932476 | orchestrator | 2026-03-02 00:51:54 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:51:54.932547 | orchestrator | 2026-03-02 00:51:54 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED 2026-03-02 00:51:54.932586 | orchestrator | 2026-03-02 00:51:54 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:51:57.963151 | orchestrator | 2026-03-02 00:51:57 | INFO  | Task 
d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:51:57.964175 | orchestrator | 2026-03-02 00:51:57 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state STARTED
2026-03-02 00:51:57.964235 | orchestrator | 2026-03-02 00:51:57 | INFO  | Wait 1 second(s) until the next check
[... identical STARTED status checks for tasks d62bf8b3-52a9-45da-835a-21bede4a9501 and 68f3d702-dab6-4b09-b3ad-3691040cc299 repeated every ~3 s from 00:52:01 through 00:54:30 ...]
2026-03-02 00:54:33.166801 | orchestrator | 2026-03-02 00:54:33 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:54:33.170315 | orchestrator | 2026-03-02 00:54:33 | INFO  | Task 68f3d702-dab6-4b09-b3ad-3691040cc299 is in state SUCCESS
2026-03-02 00:54:33.171483 | orchestrator |
2026-03-02 00:54:33.171524 | orchestrator |
2026-03-02 00:54:33.171532 | orchestrator | PLAY [Group hosts based on
configuration] **************************************************
2026-03-02 00:54:33.171539 | orchestrator |
2026-03-02 00:54:33.171545 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-02 00:54:33.171553 | orchestrator | Monday 02 March 2026 00:48:24 +0000 (0:00:00.250) 0:00:00.250 **********
2026-03-02 00:54:33.171559 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:54:33.171567 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:54:33.171573 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:54:33.171579 | orchestrator |
2026-03-02 00:54:33.171585 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-02 00:54:33.171590 | orchestrator | Monday 02 March 2026 00:48:25 +0000 (0:00:00.365) 0:00:00.616 **********
2026-03-02 00:54:33.171597 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-02 00:54:33.171603 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-02 00:54:33.171609 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-02 00:54:33.171615 | orchestrator |
2026-03-02 00:54:33.171622 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-02 00:54:33.171628 | orchestrator |
2026-03-02 00:54:33.171635 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-02 00:54:33.171641 | orchestrator | Monday 02 March 2026 00:48:25 +0000 (0:00:00.512) 0:00:01.129 **********
2026-03-02 00:54:33.171648 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:54:33.171653 | orchestrator |
2026-03-02 00:54:33.171660 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-02 00:54:33.171666 | orchestrator | Monday 02 March 2026 00:48:26 +0000 (0:00:00.942) 0:00:02.071 **********
2026-03-02 00:54:33.171671 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:54:33.171677 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:54:33.171684 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:54:33.171689 | orchestrator |
2026-03-02 00:54:33.171695 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-02 00:54:33.171702 | orchestrator | Monday 02 March 2026 00:48:27 +0000 (0:00:00.922) 0:00:02.994 **********
2026-03-02 00:54:33.171708 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:54:33.171714 | orchestrator |
2026-03-02 00:54:33.171720 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-02 00:54:33.171749 | orchestrator | Monday 02 March 2026 00:48:28 +0000 (0:00:00.641) 0:00:03.636 **********
2026-03-02 00:54:33.171756 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:54:33.171761 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:54:33.171767 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:54:33.171772 | orchestrator |
2026-03-02 00:54:33.171778 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-02 00:54:33.171784 | orchestrator | Monday 02 March 2026 00:48:29 +0000 (0:00:01.469) 0:00:05.106 **********
2026-03-02 00:54:33.171789 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-02 00:54:33.171796 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-02 00:54:33.171802 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-02 00:54:33.171808 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-02 00:54:33.171814 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-02 00:54:33.171820 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-02 00:54:33.171828 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-02 00:54:33.171834 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-02 00:54:33.171841 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-02 00:54:33.171846 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-02 00:54:33.171852 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-02 00:54:33.171857 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-02 00:54:33.171862 | orchestrator |
2026-03-02 00:54:33.171869 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-02 00:54:33.171890 | orchestrator | Monday 02 March 2026 00:48:32 +0000 (0:00:03.182) 0:00:08.288 **********
2026-03-02 00:54:33.171897 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-02 00:54:33.171903 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-02 00:54:33.171909 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-02 00:54:33.171915 | orchestrator |
2026-03-02 00:54:33.172144 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-02 00:54:33.172165 | orchestrator | Monday 02 March 2026 00:48:33 +0000 (0:00:01.044) 0:00:09.332 **********
2026-03-02 00:54:33.172173 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-02 00:54:33.172182 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-02 00:54:33.172189 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-02 00:54:33.172196 | orchestrator |
2026-03-02 00:54:33.172204 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-02 00:54:33.172212 | orchestrator | Monday 02 March 2026 00:48:35 +0000 (0:00:01.639) 0:00:10.972 **********
2026-03-02 00:54:33.172219 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-03-02 00:54:33.172226 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:54:33.172249 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-03-02 00:54:33.172257 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:54:33.172265 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-03-02 00:54:33.172272 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:54:33.172278 | orchestrator |
2026-03-02 00:54:33.172285 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-03-02 00:54:33.172291 | orchestrator | Monday 02 March 2026 00:48:36 +0000 (0:00:01.028) 0:00:12.001 **********
2026-03-02 00:54:33.172301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-02 00:54:33.172326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-02 00:54:33.172334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-02 00:54:33.172341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-02 00:54:33.172356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-02 00:54:33.172370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-02 00:54:33.172377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-02 00:54:33.172391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-02 00:54:33.172398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-02 00:54:33.172405 | orchestrator |
2026-03-02 00:54:33.172411 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-03-02 00:54:33.172418 | orchestrator | Monday 02 March 2026 00:48:39 +0000 (0:00:02.892) 0:00:14.893 **********
2026-03-02 00:54:33.172424 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:54:33.172431 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:54:33.172437 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:54:33.172443 | orchestrator |
2026-03-02 00:54:33.172450 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-03-02 00:54:33.172457 | orchestrator | Monday 02 March 2026 00:48:40 +0000 (0:00:01.335) 0:00:16.229 **********
2026-03-02 00:54:33.172464 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-03-02 00:54:33.172470 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-03-02 00:54:33.172477 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-03-02 00:54:33.172484 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-03-02 00:54:33.172547 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-03-02 00:54:33.172557 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-03-02 00:54:33.172563 | orchestrator |
2026-03-02 00:54:33.172570 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-03-02 00:54:33.172576 | orchestrator | Monday 02 March 2026 00:48:42 +0000 (0:00:02.031) 0:00:18.260 **********
2026-03-02 00:54:33.172617 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:54:33.172624 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:54:33.172631 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:54:33.172635 | orchestrator |
2026-03-02 00:54:33.172638 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-03-02 00:54:33.172642 | orchestrator | Monday 02 March 2026 00:48:43 +0000 (0:00:00.938) 0:00:19.199 **********
2026-03-02 00:54:33.172646 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:54:33.172650 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:54:33.172654 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:54:33.172657 | orchestrator |
2026-03-02 00:54:33.172661 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-03-02 00:54:33.172665 | orchestrator | Monday 02 March 2026 00:48:45 +0000 (0:00:01.380) 0:00:20.579 **********
2026-03-02 00:54:33.172674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-02 00:54:33.172699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-02 00:54:33.172704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-02 00:54:33.172709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__76f05abdfd1ea27fcf4bf1e4d35b6df02e2e6588', '__omit_place_holder__76f05abdfd1ea27fcf4bf1e4d35b6df02e2e6588'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-02
00:54:33.172731 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.172735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.172740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.172744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.172755 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__76f05abdfd1ea27fcf4bf1e4d35b6df02e2e6588', '__omit_place_holder__76f05abdfd1ea27fcf4bf1e4d35b6df02e2e6588'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-02 00:54:33.172759 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.172768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.172773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.172777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.172781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__76f05abdfd1ea27fcf4bf1e4d35b6df02e2e6588', '__omit_place_holder__76f05abdfd1ea27fcf4bf1e4d35b6df02e2e6588'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-02 00:54:33.172785 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.172788 | orchestrator | 2026-03-02 00:54:33.172792 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-02 00:54:33.172796 | orchestrator | Monday 02 March 2026 00:48:46 +0000 (0:00:01.590) 0:00:22.169 ********** 2026-03-02 00:54:33.172800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-02 00:54:33.172813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-02 00:54:33.172823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-02 00:54:33.172827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-02 00:54:33.172832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.172838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__76f05abdfd1ea27fcf4bf1e4d35b6df02e2e6588', '__omit_place_holder__76f05abdfd1ea27fcf4bf1e4d35b6df02e2e6588'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-02 00:54:33.172844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-02 00:54:33.172857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.172870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__76f05abdfd1ea27fcf4bf1e4d35b6df02e2e6588', '__omit_place_holder__76f05abdfd1ea27fcf4bf1e4d35b6df02e2e6588'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-02 00:54:33.172881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-02 00:54:33.172908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.172914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__76f05abdfd1ea27fcf4bf1e4d35b6df02e2e6588', '__omit_place_holder__76f05abdfd1ea27fcf4bf1e4d35b6df02e2e6588'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-02 00:54:33.172921 | orchestrator | 2026-03-02 00:54:33.172927 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-02 00:54:33.172933 | orchestrator | Monday 02 March 2026 00:48:49 +0000 (0:00:02.857) 0:00:25.026 ********** 2026-03-02 00:54:33.172939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-02 00:54:33.172952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-02 00:54:33.172966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-02 00:54:33.172979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-02 00:54:33.172985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-02 00:54:33.173041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-02 00:54:33.173049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-02 00:54:33.173054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-02 00:54:33.173062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-02 00:54:33.173067 | orchestrator | 2026-03-02 00:54:33.173073 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-02 00:54:33.173078 | orchestrator | Monday 02 March 2026 00:48:52 +0000 (0:00:03.010) 0:00:28.037 ********** 2026-03-02 00:54:33.173119 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-02 00:54:33.173206 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-02 00:54:33.173216 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-02 00:54:33.173222 | orchestrator | 2026-03-02 00:54:33.173228 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-02 00:54:33.173234 | orchestrator | Monday 02 March 2026 00:48:55 +0000 (0:00:02.811) 0:00:30.849 ********** 2026-03-02 00:54:33.173240 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-02 00:54:33.173247 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-02 00:54:33.173253 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-02 00:54:33.173260 | orchestrator | 2026-03-02 00:54:33.173858 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-02 00:54:33.173883 | orchestrator | Monday 02 March 2026 00:49:00 +0000 (0:00:05.610) 0:00:36.459 ********** 2026-03-02 00:54:33.173890 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.173896 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.173902 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.173909 | orchestrator | 2026-03-02 00:54:33.173916 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-02 00:54:33.173922 | orchestrator | Monday 02 March 2026 00:49:01 +0000 (0:00:00.626) 0:00:37.086 ********** 2026-03-02 00:54:33.173928 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-02 00:54:33.173935 | orchestrator | changed: [testbed-node-0] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-02 00:54:33.173941 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-02 00:54:33.173947 | orchestrator | 2026-03-02 00:54:33.173953 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-02 00:54:33.173959 | orchestrator | Monday 02 March 2026 00:49:03 +0000 (0:00:02.381) 0:00:39.467 ********** 2026-03-02 00:54:33.173966 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-02 00:54:33.173973 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-02 00:54:33.173978 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-02 00:54:33.174093 | orchestrator | 2026-03-02 00:54:33.174102 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-02 00:54:33.174108 | orchestrator | Monday 02 March 2026 00:49:06 +0000 (0:00:02.517) 0:00:41.984 ********** 2026-03-02 00:54:33.174114 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-02 00:54:33.174181 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-02 00:54:33.174189 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-02 00:54:33.174195 | orchestrator | 2026-03-02 00:54:33.174223 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-02 00:54:33.174230 | orchestrator | Monday 02 March 2026 00:49:09 +0000 (0:00:02.811) 0:00:44.796 ********** 2026-03-02 00:54:33.174235 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-02 00:54:33.174242 | orchestrator | changed: 
[testbed-node-0] => (item=haproxy-internal.pem) 2026-03-02 00:54:33.174248 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-02 00:54:33.174254 | orchestrator | 2026-03-02 00:54:33.174266 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-02 00:54:33.174281 | orchestrator | Monday 02 March 2026 00:49:10 +0000 (0:00:01.540) 0:00:46.337 ********** 2026-03-02 00:54:33.174288 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:54:33.174312 | orchestrator | 2026-03-02 00:54:33.174319 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-02 00:54:33.174325 | orchestrator | Monday 02 March 2026 00:49:11 +0000 (0:00:00.677) 0:00:47.014 ********** 2026-03-02 00:54:33.174333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-02 00:54:33.174343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-02 00:54:33.174407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-02 00:54:33.174418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-02 00:54:33.174434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-02 00:54:33.174440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-02 00:54:33.174447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-02 00:54:33.174454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2026-03-02 00:54:33.174464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-02 00:54:33.174471 | orchestrator | 2026-03-02 00:54:33.174477 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-02 00:54:33.174483 | orchestrator | Monday 02 March 2026 00:49:15 +0000 (0:00:03.671) 0:00:50.685 ********** 2026-03-02 00:54:33.174499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.174513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.174520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.174526 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.174533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.174540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.174593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.174600 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.174607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.174619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.174631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.174638 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.174644 | orchestrator | 2026-03-02 00:54:33.174650 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-02 00:54:33.174656 | orchestrator | Monday 02 March 2026 00:49:16 +0000 (0:00:01.128) 0:00:51.814 ********** 2026-03-02 00:54:33.174662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.174669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.174675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.174682 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.174691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.174702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.174742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.174751 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.174757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.174765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.174773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.174779 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.174786 | orchestrator | 2026-03-02 00:54:33.174867 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-02 00:54:33.174874 | orchestrator | Monday 02 March 2026 00:49:17 +0000 (0:00:01.656) 0:00:53.470 ********** 2026-03-02 00:54:33.174885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.174897 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.174909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.174915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.174921 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.174927 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.174933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.174939 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.174945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.174955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.174970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.174978 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.174985 | orchestrator | 2026-03-02 00:54:33.174991 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-02 00:54:33.175019 | orchestrator | Monday 02 March 2026 00:49:19 +0000 (0:00:01.578) 0:00:55.049 ********** 2026-03-02 00:54:33.175027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.175033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.175039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.175045 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.175051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.175060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.175075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.175081 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.175091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.175097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.175103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.175109 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.175114 | orchestrator | 2026-03-02 00:54:33.175120 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-02 00:54:33.175125 | orchestrator | Monday 02 March 2026 00:49:20 +0000 (0:00:00.697) 0:00:55.747 ********** 2026-03-02 00:54:33.175131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.175137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.175150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.175156 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.175165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.175171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.175177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.175184 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.175191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.175197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.175210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.175216 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.175222 | orchestrator | 2026-03-02 00:54:33.175228 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-02 00:54:33.175266 | orchestrator | Monday 02 March 2026 00:49:21 +0000 (0:00:00.878) 0:00:56.625 ********** 2026-03-02 00:54:33.175276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.175288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.175295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.175302 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.175309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.175315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.175326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.175333 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.175342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.175389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.175454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.175461 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.175466 | orchestrator | 2026-03-02 00:54:33.175473 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-02 00:54:33.175479 | orchestrator | Monday 02 March 2026 00:49:23 +0000 (0:00:02.099) 0:00:58.724 ********** 2026-03-02 
00:54:33.175485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.175491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.175503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.175509 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.175516 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.175527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.175540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.175547 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.175554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 
'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.175561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.175573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.175580 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.175586 | orchestrator | 2026-03-02 00:54:33.175593 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend 
internal TLS key] **** 2026-03-02 00:54:33.175600 | orchestrator | Monday 02 March 2026 00:49:24 +0000 (0:00:01.041) 0:00:59.766 ********** 2026-03-02 00:54:33.175608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.175619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.175625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-03-02 00:54:33.175633 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.175644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.175652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.175664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.175671 | 
orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.175679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-02 00:54:33.175686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-02 00:54:33.175696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-02 00:54:33.175704 | orchestrator | skipping: [testbed-node-2] 
2026-03-02 00:54:33.175711 | orchestrator | 2026-03-02 00:54:33.175719 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-02 00:54:33.175725 | orchestrator | Monday 02 March 2026 00:49:25 +0000 (0:00:00.999) 0:01:00.765 ********** 2026-03-02 00:54:33.175733 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-02 00:54:33.175739 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-02 00:54:33.175750 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-02 00:54:33.175757 | orchestrator | 2026-03-02 00:54:33.175763 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-02 00:54:33.175770 | orchestrator | Monday 02 March 2026 00:49:27 +0000 (0:00:01.889) 0:01:02.655 ********** 2026-03-02 00:54:33.175777 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-02 00:54:33.175783 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-02 00:54:33.175789 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-02 00:54:33.175877 | orchestrator | 2026-03-02 00:54:33.175886 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-02 00:54:33.175899 | orchestrator | Monday 02 March 2026 00:49:28 +0000 (0:00:01.509) 0:01:04.165 ********** 2026-03-02 00:54:33.175929 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-02 00:54:33.175954 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 
'sshd_config'})  2026-03-02 00:54:33.175961 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-02 00:54:33.175969 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-02 00:54:33.175976 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.175984 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-02 00:54:33.175991 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.176016 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-02 00:54:33.176023 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.176030 | orchestrator | 2026-03-02 00:54:33.176037 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-02 00:54:33.176044 | orchestrator | Monday 02 March 2026 00:49:29 +0000 (0:00:00.847) 0:01:05.012 ********** 2026-03-02 00:54:33.176051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-02 00:54:33.176058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-02 00:54:33.176070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-02 00:54:33.176084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-02 00:54:33.176091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-02 00:54:33.176107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-02 00:54:33.176113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-02 00:54:33.176121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-02 00:54:33.176127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-02 00:54:33.176134 | orchestrator | 2026-03-02 00:54:33.176140 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-02 00:54:33.176146 | orchestrator | Monday 02 March 2026 00:49:32 +0000 (0:00:02.807) 0:01:07.820 ********** 2026-03-02 00:54:33.176153 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:54:33.176160 | orchestrator | 2026-03-02 00:54:33.176166 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-02 00:54:33.176173 | orchestrator | Monday 02 March 2026 00:49:32 +0000 (0:00:00.554) 0:01:08.375 ********** 2026-03-02 00:54:33.176184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': 
'30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-02 00:54:33.176202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-02 00:54:33.176210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.176217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.176243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-02 00:54:33.176251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-02 00:54:33.176262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.176280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.177479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-02 00:54:33.177507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-02 00:54:33.177512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.177517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.177521 | orchestrator | 2026-03-02 00:54:33.177525 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-02 00:54:33.177529 | orchestrator | Monday 02 March 2026 00:49:36 +0000 (0:00:03.945) 0:01:12.321 ********** 2026-03-02 00:54:33.177537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 
'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-02 00:54:33.177549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-02 00:54:33.177561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-02 00:54:33.177565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-02 00:54:33.177569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.177573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.177579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.177586 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.177590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.177594 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.177598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-02 00:54:33.177605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-02 00:54:33.177609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.177613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.177617 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.177621 | orchestrator | 2026-03-02 00:54:33.177624 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-02 00:54:33.177628 | orchestrator | Monday 02 March 2026 00:49:37 +0000 (0:00:01.070) 0:01:13.391 ********** 2026-03-02 00:54:33.177633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-02 00:54:33.177641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-02 00:54:33.177646 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.177650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-02 00:54:33.177654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-02 00:54:33.177658 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.177661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-02 00:54:33.177665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8042', 'listen_port': '8042'}})  2026-03-02 00:54:33.177669 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.177673 | orchestrator | 2026-03-02 00:54:33.177676 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-02 00:54:33.177680 | orchestrator | Monday 02 March 2026 00:49:38 +0000 (0:00:01.099) 0:01:14.491 ********** 2026-03-02 00:54:33.177684 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.177688 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.177692 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.177697 | orchestrator | 2026-03-02 00:54:33.177702 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-02 00:54:33.177708 | orchestrator | Monday 02 March 2026 00:49:40 +0000 (0:00:01.046) 0:01:15.537 ********** 2026-03-02 00:54:33.177714 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.177722 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.177730 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.177736 | orchestrator | 2026-03-02 00:54:33.177742 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-02 00:54:33.177748 | orchestrator | Monday 02 March 2026 00:49:41 +0000 (0:00:01.677) 0:01:17.214 ********** 2026-03-02 00:54:33.177754 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:54:33.177839 | orchestrator | 2026-03-02 00:54:33.177849 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-02 00:54:33.177860 | orchestrator | Monday 02 March 2026 00:49:42 +0000 (0:00:00.742) 0:01:17.957 ********** 2026-03-02 00:54:33.177868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-02 00:54:33.177876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-02 00:54:33.177924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.177931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-02 00:54:33.177938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 
5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.177942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.177946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.177955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}})  2026-03-02 00:54:33.177959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.177963 | orchestrator | 2026-03-02 00:54:33.177969 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-02 00:54:33.177973 | orchestrator | Monday 02 March 2026 00:49:45 +0000 (0:00:03.144) 0:01:21.101 ********** 2026-03-02 00:54:33.177977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-02 00:54:33.177984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.177988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.177993 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.178064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-02 00:54:33.178082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.178095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.178102 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.178109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-02 00:54:33.178122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.178129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.178141 | orchestrator | skipping: [testbed-node-1] 2026-03-02 
00:54:33.178148 | orchestrator | 2026-03-02 00:54:33.178154 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-02 00:54:33.178161 | orchestrator | Monday 02 March 2026 00:49:46 +0000 (0:00:01.210) 0:01:22.312 ********** 2026-03-02 00:54:33.178168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-02 00:54:33.178176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-02 00:54:33.178184 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.178191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-02 00:54:33.178198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-02 00:54:33.178204 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.178210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-02 00:54:33.178221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-02 00:54:33.178228 | orchestrator | 
skipping: [testbed-node-2] 2026-03-02 00:54:33.178234 | orchestrator | 2026-03-02 00:54:33.178240 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-02 00:54:33.178245 | orchestrator | Monday 02 March 2026 00:49:47 +0000 (0:00:00.995) 0:01:23.307 ********** 2026-03-02 00:54:33.178251 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.178258 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.178264 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.178271 | orchestrator | 2026-03-02 00:54:33.178277 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-02 00:54:33.178284 | orchestrator | Monday 02 March 2026 00:49:49 +0000 (0:00:01.382) 0:01:24.690 ********** 2026-03-02 00:54:33.178290 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.178297 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.178303 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.178335 | orchestrator | 2026-03-02 00:54:33.178341 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-02 00:54:33.178345 | orchestrator | Monday 02 March 2026 00:49:51 +0000 (0:00:01.861) 0:01:26.551 ********** 2026-03-02 00:54:33.178348 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.178352 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.178356 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.178360 | orchestrator | 2026-03-02 00:54:33.178364 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-02 00:54:33.178368 | orchestrator | Monday 02 March 2026 00:49:51 +0000 (0:00:00.274) 0:01:26.826 ********** 2026-03-02 00:54:33.178371 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:54:33.178381 | orchestrator | 2026-03-02 00:54:33.178384 | 
orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-02 00:54:33.178388 | orchestrator | Monday 02 March 2026 00:49:52 +0000 (0:00:00.739) 0:01:27.565 ********** 2026-03-02 00:54:33.178396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-02 00:54:33.178402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-02 00:54:33.178406 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-02 00:54:33.178410 | orchestrator | 2026-03-02 00:54:33.178414 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-02 00:54:33.178421 | orchestrator | Monday 02 March 2026 00:49:54 +0000 (0:00:02.379) 0:01:29.945 ********** 2026-03-02 00:54:33.178426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-02 
00:54:33.178430 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.178440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-02 00:54:33.178446 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.178452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-02 00:54:33.178461 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.178496 | orchestrator | 2026-03-02 
00:54:33.178502 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-02 00:54:33.178509 | orchestrator | Monday 02 March 2026 00:49:55 +0000 (0:00:01.287) 0:01:31.233 ********** 2026-03-02 00:54:33.178516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-02 00:54:33.178524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-02 00:54:33.178533 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.178539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-02 00:54:33.178551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-02 00:54:33.178556 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.178560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-02 00:54:33.178570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-02 00:54:33.178577 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.178652 | orchestrator | 2026-03-02 00:54:33.178659 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-02 00:54:33.178665 | orchestrator | Monday 02 March 2026 00:49:57 +0000 (0:00:01.502) 0:01:32.735 ********** 2026-03-02 00:54:33.178672 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.178678 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.178690 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.178697 | orchestrator | 2026-03-02 00:54:33.178703 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-02 00:54:33.178709 | orchestrator | Monday 02 March 2026 00:49:57 +0000 (0:00:00.602) 0:01:33.338 ********** 2026-03-02 00:54:33.178717 | 
orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.178727 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.178735 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.178741 | orchestrator | 2026-03-02 00:54:33.178747 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-02 00:54:33.178754 | orchestrator | Monday 02 March 2026 00:49:58 +0000 (0:00:01.180) 0:01:34.518 ********** 2026-03-02 00:54:33.178760 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:54:33.178767 | orchestrator | 2026-03-02 00:54:33.178773 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-02 00:54:33.178781 | orchestrator | Monday 02 March 2026 00:49:59 +0000 (0:00:00.743) 0:01:35.262 ********** 2026-03-02 00:54:33.178793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-02 00:54:33.178803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-02 00:54:33.178821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.178829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.178850 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.178858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.178865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.178872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.178888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-02 00:54:33.178898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.178909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.178915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.178921 | 
orchestrator | 2026-03-02 00:54:33.178927 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-02 00:54:33.178933 | orchestrator | Monday 02 March 2026 00:50:02 +0000 (0:00:03.159) 0:01:38.421 ********** 2026-03-02 00:54:33.178939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-02 00:54:33.178955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.178962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.178977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.178985 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.178991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-02 00:54:33.179189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.179222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.179306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.179313 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.179320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-02 00:54:33.179336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.179344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.179350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.179363 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.179370 | orchestrator | 2026-03-02 00:54:33.179376 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-02 00:54:33.179383 | orchestrator | Monday 02 March 2026 00:50:03 +0000 (0:00:00.800) 0:01:39.222 ********** 2026-03-02 00:54:33.179391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-02 00:54:33.179402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-02 00:54:33.179410 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.179417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-02 00:54:33.179423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-02 00:54:33.179430 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.179436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-02 00:54:33.179443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-02 00:54:33.179450 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.179457 | orchestrator | 2026-03-02 00:54:33.179463 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-02 00:54:33.179470 | orchestrator | Monday 02 March 2026 00:50:04 +0000 (0:00:00.781) 0:01:40.003 ********** 2026-03-02 00:54:33.179476 | orchestrator | changed: [testbed-node-0] 2026-03-02 
00:54:33.179483 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.179490 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.179496 | orchestrator | 2026-03-02 00:54:33.179503 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-02 00:54:33.179510 | orchestrator | Monday 02 March 2026 00:50:05 +0000 (0:00:01.152) 0:01:41.156 ********** 2026-03-02 00:54:33.179517 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.179523 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.179530 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.179536 | orchestrator | 2026-03-02 00:54:33.179556 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-02 00:54:33.179564 | orchestrator | Monday 02 March 2026 00:50:07 +0000 (0:00:02.176) 0:01:43.333 ********** 2026-03-02 00:54:33.179570 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.179576 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.179582 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.179588 | orchestrator | 2026-03-02 00:54:33.179594 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-02 00:54:33.179601 | orchestrator | Monday 02 March 2026 00:50:08 +0000 (0:00:00.766) 0:01:44.100 ********** 2026-03-02 00:54:33.179607 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.179613 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.179626 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.179632 | orchestrator | 2026-03-02 00:54:33.179638 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-02 00:54:33.179645 | orchestrator | Monday 02 March 2026 00:50:08 +0000 (0:00:00.325) 0:01:44.425 ********** 2026-03-02 00:54:33.179651 | orchestrator | included: designate for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-02 00:54:33.179657 | orchestrator | 2026-03-02 00:54:33.179663 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-02 00:54:33.179669 | orchestrator | Monday 02 March 2026 00:50:09 +0000 (0:00:00.725) 0:01:45.150 ********** 2026-03-02 00:54:33.179676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-02 00:54:33.179684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-02 00:54:33.179718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.179727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.179738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.179750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.179756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.179762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-02 00:54:33.179772 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-02 00:54:33.179779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.179785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.179800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.179808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.179909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.179921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-02 00:54:33.179927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-02 00:54:33.179933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-03-02 00:54:33.179946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.179959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.179965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.179971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 
'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.179977 | orchestrator | 2026-03-02 00:54:33.179983 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-02 00:54:33.179990 | orchestrator | Monday 02 March 2026 00:50:14 +0000 (0:00:04.481) 0:01:49.632 ********** 2026-03-02 00:54:33.180070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-02 00:54:33.180079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-02 00:54:33.180096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.180102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.180108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.180113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.180124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-02 00:54:33.180130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': 
False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.180144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-02 00:54:33.180151 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.180158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.180163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.180169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.180175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.180185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.180191 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.180198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-02 00:54:33.180217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 
 2026-03-02 00:54:33.180224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.180230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.180236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.180245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.180252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.180263 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.180268 | orchestrator | 2026-03-02 00:54:33.180275 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-02 00:54:33.180282 | orchestrator | Monday 02 March 2026 00:50:14 +0000 (0:00:00.804) 0:01:50.437 ********** 2026-03-02 00:54:33.180288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-02 00:54:33.180295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-02 00:54:33.180302 | orchestrator | 
skipping: [testbed-node-0] 2026-03-02 00:54:33.180312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-02 00:54:33.180318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-02 00:54:33.180323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-02 00:54:33.180329 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.180335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-02 00:54:33.180340 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.180346 | orchestrator | 2026-03-02 00:54:33.180351 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-02 00:54:33.180357 | orchestrator | Monday 02 March 2026 00:50:15 +0000 (0:00:00.911) 0:01:51.348 ********** 2026-03-02 00:54:33.180362 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.180369 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.180374 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.180380 | orchestrator | 2026-03-02 00:54:33.180386 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-02 00:54:33.180392 | orchestrator | Monday 02 March 2026 00:50:17 +0000 (0:00:01.540) 0:01:52.889 ********** 2026-03-02 00:54:33.180398 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.180404 | 
orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.180410 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.180416 | orchestrator | 2026-03-02 00:54:33.180421 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-02 00:54:33.180427 | orchestrator | Monday 02 March 2026 00:50:19 +0000 (0:00:01.826) 0:01:54.715 ********** 2026-03-02 00:54:33.180434 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.180440 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.180465 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.180472 | orchestrator | 2026-03-02 00:54:33.180483 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-02 00:54:33.180490 | orchestrator | Monday 02 March 2026 00:50:19 +0000 (0:00:00.404) 0:01:55.119 ********** 2026-03-02 00:54:33.180496 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:54:33.180502 | orchestrator | 2026-03-02 00:54:33.180515 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-02 00:54:33.180551 | orchestrator | Monday 02 March 2026 00:50:20 +0000 (0:00:00.795) 0:01:55.915 ********** 2026-03-02 00:54:33.180565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-02 00:54:33.180581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-02 00:54:33.180593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-02 00:54:33.180609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-02 00:54:33.180621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-02 00:54:33.180638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-02 00:54:33.180645 | orchestrator |
2026-03-02 00:54:33.180651 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2026-03-02 00:54:33.180657 | orchestrator | Monday 02 March 2026 00:50:25 +0000 (0:00:04.858) 0:02:00.774 **********
2026-03-02 00:54:33.180664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-02 00:54:33.180702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-02 00:54:33.180709 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:54:33.180724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-02 00:54:33.180739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-02 00:54:33.180746 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:54:33.182136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-02 00:54:33.182212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-02 00:54:33.182222 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:54:33.182228 | orchestrator |
2026-03-02 00:54:33.182233 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2026-03-02 00:54:33.182239 | orchestrator | Monday 02 March 2026 00:50:28 +0000 (0:00:03.729) 0:02:04.503 **********
2026-03-02 00:54:33.182245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-02 00:54:33.182260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-02 00:54:33.182266 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:54:33.182271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-02 00:54:33.182290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-02 00:54:33.182299 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:54:33.182304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-02 00:54:33.182313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-02 00:54:33.182318 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:54:33.182322 | orchestrator |
2026-03-02 00:54:33.182327 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-03-02 00:54:33.182332 | orchestrator | Monday 02 March 2026 00:50:32 +0000 (0:00:03.623) 0:02:08.127 **********
2026-03-02 00:54:33.182336 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:54:33.182341 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:54:33.182345 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:54:33.182350 | orchestrator |
2026-03-02 00:54:33.182355 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-03-02 00:54:33.182359 | orchestrator | Monday 02 March 2026 00:50:33 +0000 (0:00:01.167) 0:02:09.294 **********
2026-03-02 00:54:33.182364 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:54:33.182368 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:54:33.182373 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:54:33.182377 | orchestrator |
2026-03-02 00:54:33.182382 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-03-02 00:54:33.182386 | orchestrator | Monday 02 March 2026 00:50:35 +0000 (0:00:01.885) 0:02:11.180 **********
2026-03-02 00:54:33.182391 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:54:33.182395 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:54:33.182400 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:54:33.182404 | orchestrator |
2026-03-02 00:54:33.182409 | orchestrator | TASK [include_role : grafana] **************************************************
2026-03-02 00:54:33.182414 | orchestrator | Monday 02 March 2026 00:50:36 +0000 (0:00:01.106) 0:02:11.643 **********
2026-03-02 00:54:33.182418 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:54:33.182423 | orchestrator |
2026-03-02 00:54:33.182427 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2026-03-02 00:54:33.182432 | orchestrator | Monday 02 March 2026 00:50:37 +0000 (0:00:01.106) 0:02:12.750 **********
2026-03-02 00:54:33.182443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-02 00:54:33.182454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-02 00:54:33.182459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-02 00:54:33.182464 | orchestrator |
2026-03-02 00:54:33.182468 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2026-03-02 00:54:33.182473 | orchestrator | Monday 02 March 2026 00:50:41 +0000 (0:00:04.517) 0:02:17.268 **********
2026-03-02 00:54:33.182480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-02 00:54:33.182485 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:54:33.182490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-02 00:54:33.182495 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:54:33.182499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-02 00:54:33.182504 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:54:33.182513 | orchestrator |
2026-03-02 00:54:33.182521 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2026-03-02 00:54:33.182526 | orchestrator | Monday 02 March 2026 00:50:42 +0000 (0:00:00.639) 0:02:17.908 **********
2026-03-02 00:54:33.182531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-03-02 00:54:33.182536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-03-02 00:54:33.182542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-03-02 00:54:33.182546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-03-02 00:54:33.182551 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:54:33.182555 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:54:33.182560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-03-02 00:54:33.182565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-03-02 00:54:33.182569 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:54:33.182574 | orchestrator |
2026-03-02 00:54:33.182578 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-03-02 00:54:33.182583 | orchestrator | Monday 02 March 2026 00:50:42 +0000 (0:00:00.543) 0:02:18.452 **********
2026-03-02 00:54:33.182587 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:54:33.182592 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:54:33.182596 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:54:33.182601 | orchestrator |
2026-03-02 00:54:33.182605 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-03-02 00:54:33.182610 | orchestrator | Monday 02 March 2026 00:50:44 +0000 (0:00:01.459) 0:02:19.911 **********
2026-03-02 00:54:33.182614 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:54:33.182619 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:54:33.182623 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:54:33.182628 | orchestrator |
2026-03-02 00:54:33.182632 | orchestrator | TASK [include_role : heat] *****************************************************
2026-03-02 00:54:33.182637 | orchestrator | Monday 02 March 2026 00:50:46 +0000 (0:00:01.824) 0:02:21.736 **********
2026-03-02 00:54:33.182642 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:54:33.182646 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:54:33.182651 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:54:33.182655 | orchestrator |
2026-03-02 00:54:33.182660 | orchestrator | TASK [include_role : horizon] **************************************************
2026-03-02 00:54:33.182664 | orchestrator | Monday 02 March 2026 00:50:46 +0000 (0:00:00.410) 0:02:22.147 **********
2026-03-02 00:54:33.182672 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:54:33.182676 | orchestrator |
2026-03-02 00:54:33.182681 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-03-02 00:54:33.182685 | orchestrator | Monday 02 March 2026 00:50:47 +0000 (0:00:00.820) 0:02:22.967 **********
2026-03-02 00:54:33.182695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-02 00:54:33.182708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-02 00:54:33.182742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon':
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-02 00:54:33.182748 | orchestrator | 2026-03-02 00:54:33.182753 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-02 00:54:33.182758 | orchestrator | Monday 02 March 2026 00:50:52 +0000 (0:00:04.587) 0:02:27.555 ********** 2026-03-02 00:54:33.182766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-02 00:54:33.182775 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.182784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-02 00:54:33.182797 | orchestrator | skipping: [testbed-node-1] 
2026-03-02 00:54:33.182805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-02 00:54:33.182813 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.182818 | orchestrator | 2026-03-02 00:54:33.182823 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-02 00:54:33.182827 | orchestrator | Monday 02 March 2026 00:50:53 +0000 (0:00:01.393) 0:02:28.948 ********** 2026-03-02 00:54:33.182836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-02 00:54:33.182842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-02 00:54:33.182848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-02 00:54:33.182853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-02 00:54:33.182858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-02 00:54:33.182863 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.182868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-02 00:54:33.182872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-02 00:54:33.182877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-02 00:54:33.182885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-02 00:54:33.182893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-02 00:54:33.182898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-02 00:54:33.182903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-02 00:54:33.182908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-02 00:54:33.182912 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.182919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-02 00:54:33.182924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-02 00:54:33.182929 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.182934 | orchestrator | 2026-03-02 00:54:33.182938 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-02 00:54:33.182943 | orchestrator | 
Monday 02 March 2026 00:50:54 +0000 (0:00:00.992) 0:02:29.940 ********** 2026-03-02 00:54:33.182947 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.182952 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.182956 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.182961 | orchestrator | 2026-03-02 00:54:33.182965 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-02 00:54:33.182970 | orchestrator | Monday 02 March 2026 00:50:55 +0000 (0:00:01.312) 0:02:31.253 ********** 2026-03-02 00:54:33.182974 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.182979 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.182983 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.182988 | orchestrator | 2026-03-02 00:54:33.182993 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-02 00:54:33.183013 | orchestrator | Monday 02 March 2026 00:50:57 +0000 (0:00:02.151) 0:02:33.405 ********** 2026-03-02 00:54:33.183022 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.183029 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.183036 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.183042 | orchestrator | 2026-03-02 00:54:33.183049 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-02 00:54:33.183056 | orchestrator | Monday 02 March 2026 00:50:58 +0000 (0:00:00.351) 0:02:33.756 ********** 2026-03-02 00:54:33.183062 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.183069 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.183075 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.183087 | orchestrator | 2026-03-02 00:54:33.183094 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-02 00:54:33.183102 | orchestrator | Monday 02 
March 2026 00:50:58 +0000 (0:00:00.538) 0:02:34.294 ********** 2026-03-02 00:54:33.183110 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:54:33.183118 | orchestrator | 2026-03-02 00:54:33.183126 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-02 00:54:33.183133 | orchestrator | Monday 02 March 2026 00:50:59 +0000 (0:00:00.914) 0:02:35.209 ********** 2026-03-02 00:54:33.183146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-02 00:54:33.183153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-02 00:54:33.183158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-02 00:54:33.183169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-02 00:54:33.183174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-02 00:54:33.183183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-02 00:54:33.183192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-02 00:54:33.183197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-02 00:54:33.183206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-02 00:54:33.183211 | orchestrator | 2026-03-02 00:54:33.183218 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-02 00:54:33.183226 | orchestrator | Monday 02 March 2026 00:51:04 +0000 (0:00:04.529) 0:02:39.739 ********** 2026-03-02 00:54:33.183233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-02 00:54:33.183246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-02 00:54:33.183257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-02 00:54:33.183266 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.183272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-02 00:54:33.183281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-02 00:54:33.183286 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-02 00:54:33.183294 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.183299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-02 00:54:33.183307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-02 00:54:33.183312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-02 00:54:33.183317 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.183321 | orchestrator | 2026-03-02 00:54:33.183326 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-02 00:54:33.183331 | orchestrator | Monday 02 March 2026 00:51:05 +0000 (0:00:00.898) 0:02:40.638 ********** 2026-03-02 00:54:33.183335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-02 00:54:33.183341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-02 
00:54:33.183346 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.183353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-02 00:54:33.183359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-02 00:54:33.183369 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.183374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-02 00:54:33.183378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-02 00:54:33.183383 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.183387 | orchestrator | 2026-03-02 00:54:33.183392 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-02 00:54:33.183396 | orchestrator | Monday 02 March 2026 00:51:05 +0000 (0:00:00.819) 0:02:41.458 ********** 2026-03-02 00:54:33.183401 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.183405 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.183410 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.183415 | orchestrator | 2026-03-02 00:54:33.183419 | orchestrator | TASK 
[proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-02 00:54:33.183424 | orchestrator | Monday 02 March 2026 00:51:07 +0000 (0:00:01.309) 0:02:42.767 ********** 2026-03-02 00:54:33.183428 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.183433 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.183437 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.183442 | orchestrator | 2026-03-02 00:54:33.183446 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-02 00:54:33.183451 | orchestrator | Monday 02 March 2026 00:51:09 +0000 (0:00:02.064) 0:02:44.832 ********** 2026-03-02 00:54:33.183455 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.183460 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.183465 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.183469 | orchestrator | 2026-03-02 00:54:33.183474 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-02 00:54:33.183479 | orchestrator | Monday 02 March 2026 00:51:09 +0000 (0:00:00.445) 0:02:45.277 ********** 2026-03-02 00:54:33.183484 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:54:33.183489 | orchestrator | 2026-03-02 00:54:33.183494 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-02 00:54:33.183498 | orchestrator | Monday 02 March 2026 00:51:10 +0000 (0:00:00.949) 0:02:46.227 ********** 2026-03-02 00:54:33.183507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-02 00:54:33.183513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.183530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-02 00:54:33.183537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.183542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-02 00:54:33.183550 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.183556 | orchestrator | 2026-03-02 00:54:33.183562 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-02 00:54:33.183567 | orchestrator | Monday 02 March 2026 00:51:13 +0000 (0:00:03.125) 0:02:49.352 ********** 2026-03-02 00:54:33.183577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-02 00:54:33.183586 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.183594 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.183603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-02 00:54:33.183611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.183618 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.183630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-02 00:54:33.183644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.183651 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.183658 | orchestrator | 2026-03-02 00:54:33.183669 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-02 00:54:33.183676 | orchestrator | Monday 02 March 2026 00:51:14 +0000 (0:00:00.826) 0:02:50.179 ********** 2026-03-02 00:54:33.183684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-02 00:54:33.183692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-02 00:54:33.183700 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.183706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-02 00:54:33.183713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-02 00:54:33.183721 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.183728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-02 00:54:33.183736 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-02 00:54:33.183744 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.183752 | orchestrator | 2026-03-02 00:54:33.183760 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-02 00:54:33.183767 | orchestrator | Monday 02 March 2026 00:51:15 +0000 (0:00:00.940) 0:02:51.120 ********** 2026-03-02 00:54:33.183772 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.183777 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.183781 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.183786 | orchestrator | 2026-03-02 00:54:33.183791 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-02 00:54:33.183806 | orchestrator | Monday 02 March 2026 00:51:16 +0000 (0:00:01.345) 0:02:52.465 ********** 2026-03-02 00:54:33.183811 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.183816 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.183820 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.183825 | orchestrator | 2026-03-02 00:54:33.183830 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-02 00:54:33.183834 | orchestrator | Monday 02 March 2026 00:51:19 +0000 (0:00:02.345) 0:02:54.811 ********** 2026-03-02 00:54:33.183844 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:54:33.183848 | orchestrator | 2026-03-02 00:54:33.183853 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-02 00:54:33.183858 | orchestrator | Monday 02 March 2026 00:51:20 +0000 (0:00:01.421) 0:02:56.232 ********** 2026-03-02 00:54:33.183871 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-02 00:54:33.183880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.183902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.183911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.183920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-02 00:54:33.183936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.183949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.183958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.183972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-02 00:54:33.183978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.183983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.183992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.184035 | orchestrator | 2026-03-02 00:54:33.184042 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-02 00:54:33.184047 | orchestrator | Monday 02 March 2026 00:51:24 +0000 (0:00:03.691) 0:02:59.923 ********** 2026-03-02 00:54:33.184056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-02 00:54:33.184062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.184071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-02 00:54:33.184076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.184081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 
'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.184093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.184101 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.184106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.184112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 
'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.184116 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.184125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-02 00:54:33.184131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.184136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.184147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.184153 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.184158 | orchestrator | 2026-03-02 00:54:33.184168 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-02 00:54:33.184175 | orchestrator | Monday 02 March 2026 00:51:25 +0000 (0:00:00.733) 0:03:00.657 ********** 2026-03-02 00:54:33.184182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-02 00:54:33.184189 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-02 00:54:33.184197 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.184205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-02 00:54:33.184214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-02 00:54:33.184221 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.184229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-02 00:54:33.184237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-02 00:54:33.184245 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.184253 | orchestrator | 2026-03-02 00:54:33.184261 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-02 00:54:33.184270 | orchestrator | Monday 02 March 2026 00:51:26 +0000 (0:00:01.040) 0:03:01.698 ********** 2026-03-02 00:54:33.184276 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.184284 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.184289 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.184294 | orchestrator | 2026-03-02 00:54:33.184299 | orchestrator | TASK [proxysql-config : Copying over manila 
ProxySQL rules config] ************* 2026-03-02 00:54:33.184304 | orchestrator | Monday 02 March 2026 00:51:27 +0000 (0:00:01.338) 0:03:03.036 ********** 2026-03-02 00:54:33.184308 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.184313 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.184318 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.184323 | orchestrator | 2026-03-02 00:54:33.184328 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-02 00:54:33.184337 | orchestrator | Monday 02 March 2026 00:51:29 +0000 (0:00:01.788) 0:03:04.825 ********** 2026-03-02 00:54:33.184342 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:54:33.184347 | orchestrator | 2026-03-02 00:54:33.184351 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-02 00:54:33.184356 | orchestrator | Monday 02 March 2026 00:51:30 +0000 (0:00:01.157) 0:03:05.982 ********** 2026-03-02 00:54:33.184362 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-02 00:54:33.184367 | orchestrator | 2026-03-02 00:54:33.184372 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-02 00:54:33.184376 | orchestrator | Monday 02 March 2026 00:51:33 +0000 (0:00:03.100) 0:03:09.082 ********** 2026-03-02 00:54:33.184386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-02 00:54:33.184393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-02 00:54:33.184398 | orchestrator | skipping: [testbed-node-0] 
2026-03-02 00:54:33.184407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-02 00:54:33.184417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-02 00:54:33.184422 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.184430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-02 00:54:33.184439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-02 00:54:33.184448 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.184453 | orchestrator | 2026-03-02 00:54:33.184458 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-02 00:54:33.184463 | orchestrator | Monday 02 March 2026 00:51:35 +0000 (0:00:02.175) 0:03:11.258 ********** 2026-03-02 00:54:33.184468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-02 00:54:33.184492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-02 00:54:33.184507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-02 00:54:33.184512 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.184517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': 
{'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-02 00:54:33.184523 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.184531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-02 00:54:33.184537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-02 00:54:33.184542 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.184551 | orchestrator | 2026-03-02 00:54:33.184556 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-02 00:54:33.184560 | orchestrator | Monday 02 March 2026 00:51:38 +0000 (0:00:02.354) 0:03:13.613 ********** 2026-03-02 00:54:33.184569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-02 00:54:33.184575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-02 00:54:33.184580 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.184585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-02 00:54:33.184590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}})  2026-03-02 00:54:33.184595 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.184603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-02 00:54:33.184608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-02 00:54:33.184619 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.184624 | orchestrator | 2026-03-02 00:54:33.184629 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-02 00:54:33.184635 | orchestrator | Monday 02 March 2026 00:51:40 +0000 (0:00:02.424) 0:03:16.037 ********** 2026-03-02 00:54:33.184640 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.184645 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.184651 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.184655 | orchestrator | 2026-03-02 00:54:33.184661 | orchestrator | TASK 
[proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-02 00:54:33.184666 | orchestrator | Monday 02 March 2026 00:51:42 +0000 (0:00:01.778) 0:03:17.815 ********** 2026-03-02 00:54:33.184672 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.184677 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.184682 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.184686 | orchestrator | 2026-03-02 00:54:33.184691 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-02 00:54:33.184696 | orchestrator | Monday 02 March 2026 00:51:43 +0000 (0:00:01.236) 0:03:19.052 ********** 2026-03-02 00:54:33.184704 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.184709 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.184714 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.184720 | orchestrator | 2026-03-02 00:54:33.184725 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-02 00:54:33.184731 | orchestrator | Monday 02 March 2026 00:51:43 +0000 (0:00:00.275) 0:03:19.327 ********** 2026-03-02 00:54:33.184735 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:54:33.184740 | orchestrator | 2026-03-02 00:54:33.184746 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-02 00:54:33.184751 | orchestrator | Monday 02 March 2026 00:51:45 +0000 (0:00:01.202) 0:03:20.530 ********** 2026-03-02 00:54:33.184756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-02 00:54:33.184763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-02 00:54:33.184771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-02 00:54:33.184781 | orchestrator | 2026-03-02 
00:54:33.184785 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-02 00:54:33.184790 | orchestrator | Monday 02 March 2026 00:51:46 +0000 (0:00:01.548) 0:03:22.079 ********** 2026-03-02 00:54:33.184795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-02 00:54:33.184824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-02 00:54:33.184830 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.184835 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.184840 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-02 00:54:33.184845 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.184850 | orchestrator | 2026-03-02 00:54:33.184855 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-02 00:54:33.184859 | orchestrator | Monday 02 March 2026 00:51:46 +0000 (0:00:00.350) 0:03:22.429 ********** 2026-03-02 00:54:33.184865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-02 00:54:33.184870 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.184875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-02 00:54:33.184880 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.184891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': 
{'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-02 00:54:33.184897 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.184911 | orchestrator | 2026-03-02 00:54:33.184916 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-02 00:54:33.184924 | orchestrator | Monday 02 March 2026 00:51:47 +0000 (0:00:00.720) 0:03:23.150 ********** 2026-03-02 00:54:33.184929 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.184934 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.184939 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.184944 | orchestrator | 2026-03-02 00:54:33.184949 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-02 00:54:33.184954 | orchestrator | Monday 02 March 2026 00:51:48 +0000 (0:00:00.403) 0:03:23.554 ********** 2026-03-02 00:54:33.184959 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.184963 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.184968 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.184973 | orchestrator | 2026-03-02 00:54:33.184978 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-02 00:54:33.184983 | orchestrator | Monday 02 March 2026 00:51:49 +0000 (0:00:01.075) 0:03:24.629 ********** 2026-03-02 00:54:33.184988 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.184992 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.185012 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.185018 | orchestrator | 2026-03-02 00:54:33.185023 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-02 00:54:33.185028 | orchestrator | Monday 
02 March 2026 00:51:49 +0000 (0:00:00.283) 0:03:24.913 ********** 2026-03-02 00:54:33.185033 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:54:33.185037 | orchestrator | 2026-03-02 00:54:33.185042 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-02 00:54:33.185047 | orchestrator | Monday 02 March 2026 00:51:50 +0000 (0:00:01.252) 0:03:26.165 ********** 2026-03-02 00:54:33.185062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-02 00:54:33.185068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-02 00:54:33.185098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-02 00:54:33.185113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185141 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 00:54:33.185171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-02 00:54:33.185202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 
'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185255 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-02 00:54:33.185261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 
'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-02 00:54:33.185275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 00:54:33.185288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-02 00:54:33.185297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-02 00:54:33.185373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-02 00:54:33.185381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-02 00:54:33.185392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 00:54:33.185426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 
'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-02 00:54:33.185458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-02 00:54:33.185463 | orchestrator | 2026-03-02 00:54:33.185468 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-02 00:54:33.185473 | orchestrator | Monday 02 March 2026 00:51:54 +0000 (0:00:03.850) 0:03:30.016 ********** 2026-03-02 00:54:33.185481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
2026-03-02 00:54:33.185491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-02 00:54:33.185510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-02 00:54:33.185534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-02 00:54:33.185571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 00:54:33.185601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 00:54:33.185638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-02 00:54:33.185673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-02 00:54:33.185696 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.185704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-02 00:54:33.185709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-02 00:54:33.185714 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.185720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-02 00:54:33.185725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185733 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-02 00:54:33.185758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-02 
00:54:33.185776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 00:54:33.185793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:54:33.185948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-02 00:54:33.185958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.185969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-02 00:54:33.185981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-02 00:54:33.185986 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.185991 | orchestrator | 2026-03-02 00:54:33.185996 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 
2026-03-02 00:54:33.186097 | orchestrator | Monday 02 March 2026 00:51:56 +0000 (0:00:01.526) 0:03:31.542 ********** 2026-03-02 00:54:33.186103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-02 00:54:33.186109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-02 00:54:33.186115 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.186126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-02 00:54:33.186131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-02 00:54:33.186136 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.186141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-02 00:54:33.186146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-02 00:54:33.186151 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.186156 | orchestrator | 2026-03-02 00:54:33.186160 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-02 00:54:33.186166 | orchestrator | Monday 02 March 2026 00:51:57 +0000 
(0:00:01.736) 0:03:33.279 ********** 2026-03-02 00:54:33.186170 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.186175 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.186180 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.186185 | orchestrator | 2026-03-02 00:54:33.186190 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-02 00:54:33.186195 | orchestrator | Monday 02 March 2026 00:51:59 +0000 (0:00:01.413) 0:03:34.693 ********** 2026-03-02 00:54:33.186200 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.186205 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.186210 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.186215 | orchestrator | 2026-03-02 00:54:33.186220 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-02 00:54:33.186231 | orchestrator | Monday 02 March 2026 00:52:01 +0000 (0:00:02.047) 0:03:36.740 ********** 2026-03-02 00:54:33.186236 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:54:33.186240 | orchestrator | 2026-03-02 00:54:33.186245 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-02 00:54:33.186250 | orchestrator | Monday 02 March 2026 00:52:02 +0000 (0:00:01.140) 0:03:37.881 ********** 2026-03-02 00:54:33.186259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-02 00:54:33.186267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-02 00:54:33.186276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-02 00:54:33.186281 | orchestrator | 2026-03-02 00:54:33.186286 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-02 00:54:33.186291 | orchestrator | Monday 02 March 2026 00:52:05 +0000 (0:00:03.451) 0:03:41.333 ********** 2026-03-02 00:54:33.186297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-02 00:54:33.186306 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.186314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-02 00:54:33.186320 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.186325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-02 00:54:33.186330 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.186335 | orchestrator | 2026-03-02 00:54:33.186340 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-02 00:54:33.186346 | orchestrator | Monday 02 March 2026 00:52:06 +0000 (0:00:00.436) 0:03:41.769 ********** 2026-03-02 00:54:33.186351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-02 00:54:33.186357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-02 00:54:33.186362 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.186370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-02 00:54:33.186375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-02 00:54:33.186380 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.186385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-02 00:54:33.186390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-02 00:54:33.186399 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.186404 | orchestrator | 2026-03-02 00:54:33.186409 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-02 00:54:33.186414 | orchestrator | Monday 02 March 2026 00:52:06 +0000 (0:00:00.629) 0:03:42.399 ********** 2026-03-02 00:54:33.186422 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.186429 | orchestrator | 
changed: [testbed-node-1] 2026-03-02 00:54:33.186440 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.186451 | orchestrator | 2026-03-02 00:54:33.186460 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-02 00:54:33.186469 | orchestrator | Monday 02 March 2026 00:52:08 +0000 (0:00:01.756) 0:03:44.156 ********** 2026-03-02 00:54:33.186477 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.186486 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.186493 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.186500 | orchestrator | 2026-03-02 00:54:33.186509 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-02 00:54:33.186516 | orchestrator | Monday 02 March 2026 00:52:10 +0000 (0:00:01.770) 0:03:45.927 ********** 2026-03-02 00:54:33.186525 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:54:33.186533 | orchestrator | 2026-03-02 00:54:33.186541 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-02 00:54:33.186549 | orchestrator | Monday 02 March 2026 00:52:11 +0000 (0:00:01.446) 0:03:47.373 ********** 2026-03-02 00:54:33.186564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-02 00:54:33.186576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.186591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.186610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-02 00:54:33.186620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.186634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-02 00:54:33.186644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.186657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.186672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.186680 | orchestrator | 2026-03-02 00:54:33.186688 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-02 00:54:33.186697 | orchestrator | Monday 02 March 2026 00:52:15 +0000 (0:00:03.861) 0:03:51.234 ********** 2026-03-02 00:54:33.186706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-02 00:54:33.186719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.186728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.186737 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.186752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-02 00:54:33.186770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-02 00:54:33.186779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-02 00:54:33.186787 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:54:33.186801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-02 00:54:33.186811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-02 00:54:33.186831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-02 00:54:33.186841 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:54:33.186849 | orchestrator |
2026-03-02 00:54:33.186858 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-03-02 00:54:33.186866 | orchestrator | Monday 02 March 2026 00:52:16 +0000 (0:00:01.069) 0:03:52.304 **********
2026-03-02 00:54:33.186875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-02 00:54:33.186885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-02 00:54:33.186895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-02 00:54:33.186904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-02 00:54:33.186913 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:54:33.186922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-02 00:54:33.186931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-02 00:54:33.186940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-02 00:54:33.186949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-02 00:54:33.186957 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:54:33.186965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-02 00:54:33.187063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-02 00:54:33.187090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-02 00:54:33.187098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-02 00:54:33.187115 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:54:33.187122 | orchestrator |
2026-03-02 00:54:33.187130 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-03-02 00:54:33.187139 | orchestrator | Monday 02 March 2026 00:52:17 +0000 (0:00:00.868) 0:03:53.172 **********
2026-03-02 00:54:33.187147 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:54:33.187155 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:54:33.187162 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:54:33.187170 | orchestrator |
2026-03-02 00:54:33.187177 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-03-02 00:54:33.187185 | orchestrator | Monday 02 March 2026 00:52:19 +0000 (0:00:01.483) 0:03:54.656 **********
2026-03-02 00:54:33.187193 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:54:33.187202 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:54:33.187210 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:54:33.187218 | orchestrator |
2026-03-02 00:54:33.187226 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-03-02 00:54:33.187234 | orchestrator | Monday 02 March 2026 00:52:21 +0000 (0:00:02.210) 0:03:56.866 **********
2026-03-02 00:54:33.187242 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:54:33.187250 | orchestrator |
2026-03-02 00:54:33.187259 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-03-02 00:54:33.187278 | orchestrator | Monday 02 March 2026 00:52:22 +0000 (0:00:01.407) 0:03:58.274 **********
2026-03-02 00:54:33.187288 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-03-02 00:54:33.187298 | orchestrator |
2026-03-02 00:54:33.187306 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-03-02 00:54:33.187313 | orchestrator | Monday 02 March 2026 00:52:23 +0000 (0:00:00.790) 0:03:59.065 **********
2026-03-02 00:54:33.187324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-02 00:54:33.187334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-02 00:54:33.187343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-02 00:54:33.187350 | orchestrator |
2026-03-02 00:54:33.187359 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-03-02 00:54:33.187367 | orchestrator | Monday 02 March 2026 00:52:27 +0000 (0:00:04.361) 0:04:03.427 **********
2026-03-02 00:54:33.187388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-02 00:54:33.187396 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:54:33.187404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-02 00:54:33.187412 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:54:33.187421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-02 00:54:33.187429 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:54:33.187437 | orchestrator |
2026-03-02 00:54:33.187445 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-03-02 00:54:33.187452 | orchestrator | Monday 02 March 2026 00:52:28 +0000 (0:00:01.061) 0:04:04.488 **********
2026-03-02 00:54:33.187467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-02 00:54:33.187476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-02 00:54:33.187485 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:54:33.187493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-02 00:54:33.187501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-02 00:54:33.187509 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:54:33.187517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-02 00:54:33.187526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-02 00:54:33.187533 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:54:33.187542 | orchestrator |
2026-03-02 00:54:33.187549 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-02 00:54:33.187557 | orchestrator | Monday 02 March 2026 00:52:30 +0000 (0:00:01.450) 0:04:05.938 **********
2026-03-02 00:54:33.187573 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:54:33.187581 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:54:33.187590 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:54:33.187598 | orchestrator |
2026-03-02 00:54:33.187606 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-02 00:54:33.187615 | orchestrator | Monday 02 March 2026 00:52:32 +0000 (0:00:02.422) 0:04:08.361 **********
2026-03-02 00:54:33.187623 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:54:33.187631 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:54:33.187639 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:54:33.187647 | orchestrator |
2026-03-02 00:54:33.187655 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-03-02 00:54:33.187664 | orchestrator | Monday 02 March 2026 00:52:35 +0000 (0:00:02.971) 0:04:11.332 **********
2026-03-02 00:54:33.187674 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-03-02 00:54:33.187683 | orchestrator |
2026-03-02 00:54:33.187691 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-03-02 00:54:33.187704 | orchestrator | Monday 02 March 2026 00:52:37 +0000 (0:00:01.653) 0:04:12.986 **********
2026-03-02 00:54:33.187714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-02 00:54:33.187723 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:54:33.187732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-02 00:54:33.187740 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:54:33.187757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-02 00:54:33.187766 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:54:33.187775 | orchestrator |
2026-03-02 00:54:33.187784 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-03-02 00:54:33.187792 | orchestrator | Monday 02 March 2026 00:52:38 +0000 (0:00:01.528) 0:04:14.514 **********
2026-03-02 00:54:33.187801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-02 00:54:33.187815 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:54:33.187821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-02 00:54:33.187827 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:54:33.187832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-02 00:54:33.187837 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:54:33.187842 | orchestrator |
2026-03-02 00:54:33.187847 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-03-02 00:54:33.187852 | orchestrator | Monday 02 March 2026 00:52:40 +0000 (0:00:01.519) 0:04:16.034 **********
2026-03-02 00:54:33.187857 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:54:33.187862 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:54:33.187867 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:54:33.187872 | orchestrator |
2026-03-02 00:54:33.187877 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-02 00:54:33.187885 | orchestrator | Monday 02 March 2026 00:52:42 +0000 (0:00:02.136) 0:04:18.170 **********
2026-03-02 00:54:33.187891 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:54:33.187896 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:54:33.187901 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:54:33.187906 | orchestrator |
2026-03-02 00:54:33.187911 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-02 00:54:33.187916 | orchestrator | Monday 02 March 2026 00:52:45 +0000 (0:00:02.913) 0:04:21.084 **********
2026-03-02 00:54:33.187921 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:54:33.187926 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:54:33.187930 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:54:33.187935 | orchestrator |
2026-03-02 00:54:33.187940 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-03-02 00:54:33.187945 | orchestrator | Monday 02 March 2026 00:52:48 +0000 (0:00:03.261) 0:04:24.345 **********
2026-03-02 00:54:33.187950 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-03-02 00:54:33.187955 | orchestrator |
2026-03-02 00:54:33.187961 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-03-02 00:54:33.187965 | orchestrator | Monday 02 March 2026 00:52:49 +0000 (0:00:00.899) 0:04:25.245 **********
2026-03-02 00:54:33.187971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-02 00:54:33.187976 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:54:33.187985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-02 00:54:33.187995 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:54:33.188033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-02 00:54:33.188038 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:54:33.188044 | orchestrator |
2026-03-02 00:54:33.188049 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-03-02 00:54:33.188054 | orchestrator | Monday 02 March 2026 00:52:51 +0000 (0:00:01.400) 0:04:26.646 **********
2026-03-02 00:54:33.188059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-02 00:54:33.188064 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:54:33.188069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-02 00:54:33.188074 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:54:33.188083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-02 00:54:33.188088 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:54:33.188093 | orchestrator |
2026-03-02 00:54:33.188098 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-03-02 00:54:33.188103 | orchestrator | Monday 02 March 2026 00:52:52 +0000 (0:00:01.569) 0:04:28.216 **********
2026-03-02 00:54:33.188108 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:54:33.188113 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:54:33.188118 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:54:33.188123 | orchestrator |
2026-03-02 00:54:33.188128 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-02 00:54:33.188133 | orchestrator | Monday 02 March 2026 00:52:54 +0000 (0:00:01.821) 0:04:30.038 **********
2026-03-02 00:54:33.188138 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:54:33.188147 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:54:33.188153 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:54:33.188158 | orchestrator |
2026-03-02 00:54:33.188163 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-02 00:54:33.188168 | orchestrator | Monday 02 March 2026 00:52:56 +0000 (0:00:02.353) 0:04:32.391 **********
2026-03-02 00:54:33.188174 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:54:33.188178 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:54:33.188183 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:54:33.188188 | orchestrator |
2026-03-02 00:54:33.188193 | orchestrator | TASK [include_role : octavia] **************************************************
2026-03-02 00:54:33.188198 | orchestrator | Monday 02 March 2026 00:52:59 +0000 (0:00:03.075) 0:04:35.466 **********
2026-03-02 00:54:33.188203 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:54:33.188208 | orchestrator |
2026-03-02 00:54:33.188213 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-03-02 00:54:33.188218 | orchestrator | Monday 02 March 2026 00:53:01 +0000 (0:00:01.771) 0:04:37.238 **********
2026-03-02 00:54:33.188229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-02 00:54:33.188236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-02 00:54:33.188242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-02 00:54:33.188253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-02 00:54:33.188265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-02 00:54:33.188271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-02 00:54:33.188281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-02 00:54:33.188286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-02 00:54:33.188292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-02 00:54:33.188300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-02 00:54:33.188315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-02 00:54:33.188320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-02 00:54:33.188330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-02 00:54:33.188335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-02 00:54:33.188340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-02 00:54:33.188346 | orchestrator |
2026-03-02 00:54:33.188351 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-03-02 00:54:33.188356 | orchestrator | Monday 02 March 2026 00:53:05 +0000 (0:00:03.432) 0:04:40.671 **********
2026-03-02 00:54:33.188364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api',
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-02 00:54:33.188379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-02 00:54:33.188384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-03-02 00:54:33.188393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-02 00:54:33.188399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.188404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-02 00:54:33.188409 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.188415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-02 00:54:33.188427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-02 00:54:33.188432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-02 00:54:33.188438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.188528 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.188536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-02 00:54:33.188541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-02 00:54:33.188546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-02 00:54:33.188561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-02 00:54:33.188566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-02 00:54:33.188571 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.188576 | orchestrator | 2026-03-02 00:54:33.188581 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-02 00:54:33.188586 | orchestrator | Monday 02 March 2026 00:53:05 +0000 (0:00:00.752) 0:04:41.423 ********** 2026-03-02 00:54:33.188591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-02 00:54:33.188597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-02 00:54:33.188602 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.188618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-02 00:54:33.188624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-02 00:54:33.188630 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.188635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}})  2026-03-02 00:54:33.188640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-02 00:54:33.188646 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.188650 | orchestrator | 2026-03-02 00:54:33.188655 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-02 00:54:33.188660 | orchestrator | Monday 02 March 2026 00:53:07 +0000 (0:00:01.571) 0:04:42.995 ********** 2026-03-02 00:54:33.188665 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.188670 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.188675 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.188680 | orchestrator | 2026-03-02 00:54:33.188685 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-02 00:54:33.188696 | orchestrator | Monday 02 March 2026 00:53:08 +0000 (0:00:01.251) 0:04:44.247 ********** 2026-03-02 00:54:33.188701 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.188706 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.188710 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.188715 | orchestrator | 2026-03-02 00:54:33.188720 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-02 00:54:33.188725 | orchestrator | Monday 02 March 2026 00:53:10 +0000 (0:00:01.933) 0:04:46.181 ********** 2026-03-02 00:54:33.188730 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:54:33.188734 | orchestrator | 2026-03-02 00:54:33.188739 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-02 00:54:33.188744 | orchestrator | Monday 02 
March 2026 00:53:11 +0000 (0:00:01.309) 0:04:47.491 ********** 2026-03-02 00:54:33.188753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-02 00:54:33.188759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-02 00:54:33.188777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-02 00:54:33.188784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-02 00:54:33.188795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-02 00:54:33.188804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-02 00:54:33.188809 | orchestrator | 
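The "Copying over opensearch haproxy config" task above loops over service definitions whose `haproxy` sub-dict (mode, port, `frontend_http_extra`, ...) drives the generated frontend/backend stanzas. A minimal sketch of that mapping, using the exact dict shape from the log items; the rendering function and its output format are illustrative only, not kolla-ansible's actual Jinja template:

```python
# Illustrative sketch (not kolla-ansible code): render one service's
# 'haproxy' sub-dict, as it appears in the log items above, into
# haproxy "listen" stanzas. Dict keys mirror the log; everything else
# (function name, stanza layout) is an assumption for illustration.

def render_haproxy(haproxy_cfg, backends):
    """backends: list of (hostname, ip) tuples for backend servers."""
    stanzas = []
    for listen_name, opts in haproxy_cfg.items():
        if not opts.get("enabled"):
            continue  # disabled entries are skipped, like the skipped items in the log
        listen_port = opts.get("listen_port", opts["port"])
        lines = [
            f"listen {listen_name}",
            f"    mode {opts.get('mode', 'http')}",
            f"    bind *:{listen_port}",
        ]
        # extra frontend directives, e.g. 'option dontlog-normal' for opensearch
        for extra in opts.get("frontend_http_extra", []):
            lines.append(f"    {extra}")
        for host, ip in backends:
            lines.append(f"    server {host} {ip}:{opts['port']} check")
        stanzas.append("\n".join(lines))
    return "\n\n".join(stanzas)

# The internal opensearch entry from the log, verbatim:
cfg = {
    "opensearch": {
        "enabled": True, "mode": "http", "external": False,
        "port": "9200",
        "frontend_http_extra": ["option dontlog-normal"],
    },
}
print(render_haproxy(cfg, [("testbed-node-0", "192.168.16.10")]))
```

The `external: True` variants (e.g. `opensearch_dashboards_external` bound to `api.testbed.osism.xyz`) would additionally bind on the external VIP/FQDN, which this sketch omits.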
2026-03-02 00:54:33.188814 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-02 00:54:33.188819 | orchestrator | Monday 02 March 2026 00:53:18 +0000 (0:00:06.100) 0:04:53.592 ********** 2026-03-02 00:54:33.188835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-02 00:54:33.188841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-02 00:54:33.188851 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.188856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-02 00:54:33.188864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-02 00:54:33.188870 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.188887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-02 00:54:33.188893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-02 00:54:33.188903 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.188908 | orchestrator | 2026-03-02 00:54:33.188913 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-02 00:54:33.188918 | orchestrator | Monday 02 March 2026 00:53:18 +0000 (0:00:00.751) 0:04:54.343 ********** 2026-03-02 00:54:33.188923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-02 00:54:33.188929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-02 00:54:33.188934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-02 00:54:33.188941 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.188946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-02 00:54:33.188954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  
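Each service item in these loops also carries a `healthcheck` dict (`interval`, `timeout`, `start_period` in seconds, `retries`, and a `CMD-SHELL` test such as `healthcheck_curl` or `healthcheck_port`). A hedged sketch of how such a dict could be translated to Docker Engine healthcheck options, where durations are expressed in nanoseconds; the converter itself is an assumption for illustration, not kolla-ansible code:

```python
# Illustrative sketch only: map a 'healthcheck' dict with the field
# names seen in this log onto Docker Engine API healthcheck options.
# The Docker API expects durations in nanoseconds; the log values are
# seconds, so we scale by 1e9. Function name is hypothetical.

NS_PER_S = 10**9

def to_docker_healthcheck(hc):
    kind, *cmd = hc["test"]          # e.g. ['CMD-SHELL', 'healthcheck_curl http://...']
    if kind != "CMD-SHELL":
        raise ValueError(f"unexpected test type: {kind}")
    return {
        "test": [kind] + cmd,
        "interval": int(hc["interval"]) * NS_PER_S,
        "timeout": int(hc["timeout"]) * NS_PER_S,
        "retries": int(hc["retries"]),
        "start_period": int(hc["start_period"]) * NS_PER_S,
    }

# The opensearch healthcheck from the log items above, verbatim:
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"],
      "timeout": "30"}
result = to_docker_healthcheck(hc)
print(result["retries"], result["interval"])
```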
2026-03-02 00:54:33.188960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-02 00:54:33.188965 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.188970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-02 00:54:33.188975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-02 00:54:33.188980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-02 00:54:33.188984 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.188989 | orchestrator | 2026-03-02 00:54:33.188995 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-02 00:54:33.189026 | orchestrator | Monday 02 March 2026 00:53:19 +0000 (0:00:00.996) 0:04:55.340 ********** 2026-03-02 00:54:33.189031 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.189036 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.189041 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.189046 | orchestrator | 2026-03-02 00:54:33.189051 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-02 00:54:33.189063 | orchestrator | Monday 02 March 2026 
00:53:20 +0000 (0:00:00.892) 0:04:56.232 ********** 2026-03-02 00:54:33.189068 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.189073 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.189077 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.189082 | orchestrator | 2026-03-02 00:54:33.189101 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-02 00:54:33.189108 | orchestrator | Monday 02 March 2026 00:53:22 +0000 (0:00:01.463) 0:04:57.696 ********** 2026-03-02 00:54:33.189112 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:54:33.189118 | orchestrator | 2026-03-02 00:54:33.189123 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-02 00:54:33.189128 | orchestrator | Monday 02 March 2026 00:53:23 +0000 (0:00:01.400) 0:04:59.096 ********** 2026-03-02 00:54:33.189134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-02 00:54:33.189140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-02 00:54:33.189150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-02 00:54:33.189156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-02 00:54:33.189163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-02 00:54:33.189208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-02 00:54:33.189217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-02 00:54:33.189222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-02 00:54:33.189233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  
2026-03-02 00:54:33.189252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-02 00:54:33.189262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-02 00:54:33.189268 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-02 00:54:33.189293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-02 00:54:33.189299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-02 00:54:33.189304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-03-02 00:54:33.189312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-02 00:54:33.189330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-02 00:54:33.189336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-02 00:54:33.189345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-02 00:54:33.189365 | orchestrator | 2026-03-02 00:54:33.189370 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-02 00:54:33.189379 | orchestrator | Monday 02 March 2026 00:53:27 +0000 (0:00:04.076) 0:05:03.172 ********** 2026-03-02 00:54:33.189384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-02 00:54:33.189389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-02 00:54:33.189398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-02 00:54:33.189414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': 
{'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-02 00:54:33.189427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-02 00:54:33.189432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-02 00:54:33.189451 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.189457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-02 00:54:33.189462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-02 00:54:33.189471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-02 00:54:33.189495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-02 00:54:33.189500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-02 00:54:33.189506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-02 00:54:33.189525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-02 00:54:33.189536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-02 00:54:33.189541 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.189549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189554 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-02 00:54:33.189569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-02 00:54:33.189583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-02 00:54:33.189588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 00:54:33.189601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-02 00:54:33.189606 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.189611 | orchestrator | 2026-03-02 00:54:33.189617 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-02 00:54:33.189622 | orchestrator | Monday 02 March 2026 00:53:28 +0000 (0:00:01.238) 0:05:04.411 ********** 2026-03-02 00:54:33.189627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-02 00:54:33.189633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-02 00:54:33.189643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-02 00:54:33.189649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-02 00:54:33.189654 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.189659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-02 00:54:33.189667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-02 00:54:33.189672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-02 00:54:33.189677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-02 00:54:33.189682 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.189687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-02 00:54:33.189692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-02 00:54:33.189698 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-02 00:54:33.189706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-02 00:54:33.189711 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.189715 | orchestrator | 2026-03-02 00:54:33.189721 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-02 00:54:33.189725 | orchestrator | Monday 02 March 2026 00:53:29 +0000 (0:00:01.017) 0:05:05.428 ********** 2026-03-02 00:54:33.189730 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.189735 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.189740 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.189745 | orchestrator | 2026-03-02 00:54:33.189750 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-02 00:54:33.189754 | orchestrator | Monday 02 March 2026 00:53:30 +0000 (0:00:00.500) 0:05:05.928 ********** 2026-03-02 00:54:33.189759 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.189764 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.189769 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.189778 | orchestrator | 2026-03-02 00:54:33.189783 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-02 00:54:33.189788 | orchestrator | Monday 02 March 2026 00:53:31 +0000 (0:00:01.431) 0:05:07.360 ********** 2026-03-02 
00:54:33.189793 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:54:33.189797 | orchestrator | 2026-03-02 00:54:33.189802 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-02 00:54:33.189807 | orchestrator | Monday 02 March 2026 00:53:33 +0000 (0:00:01.773) 0:05:09.133 ********** 2026-03-02 00:54:33.189812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-02 00:54:33.189821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-02 00:54:33.189827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-02 00:54:33.189833 | orchestrator | 2026-03-02 00:54:33.189838 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-02 00:54:33.189845 | orchestrator | Monday 02 March 2026 00:53:36 +0000 (0:00:02.432) 0:05:11.566 ********** 2026-03-02 00:54:33.189851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-02 00:54:33.189859 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.189865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-02 00:54:33.189870 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.189878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-02 00:54:33.189884 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.189889 | orchestrator | 2026-03-02 00:54:33.189894 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-02 00:54:33.189899 | orchestrator | Monday 02 March 2026 00:53:36 +0000 (0:00:00.419) 0:05:11.986 ********** 2026-03-02 00:54:33.189904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-02 00:54:33.189910 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.189915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-02 00:54:33.189920 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.189925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  
2026-03-02 00:54:33.189930 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.189935 | orchestrator | 2026-03-02 00:54:33.189944 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-02 00:54:33.189949 | orchestrator | Monday 02 March 2026 00:53:37 +0000 (0:00:01.067) 0:05:13.054 ********** 2026-03-02 00:54:33.189954 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.189962 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.189967 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.189972 | orchestrator | 2026-03-02 00:54:33.189977 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-02 00:54:33.189981 | orchestrator | Monday 02 March 2026 00:53:37 +0000 (0:00:00.447) 0:05:13.501 ********** 2026-03-02 00:54:33.189986 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.189991 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.189996 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.190071 | orchestrator | 2026-03-02 00:54:33.190077 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-02 00:54:33.190082 | orchestrator | Monday 02 March 2026 00:53:39 +0000 (0:00:01.391) 0:05:14.893 ********** 2026-03-02 00:54:33.190087 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:54:33.190092 | orchestrator | 2026-03-02 00:54:33.190097 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-02 00:54:33.190102 | orchestrator | Monday 02 March 2026 00:53:41 +0000 (0:00:01.856) 0:05:16.749 ********** 2026-03-02 00:54:33.190107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-02 00:54:33.190117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-02 00:54:33.190122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-02 00:54:33.190138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-02 00:54:33.190144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-02 00:54:33.190150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-02 00:54:33.190155 | orchestrator | 2026-03-02 00:54:33.190159 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-02 00:54:33.190164 | orchestrator | Monday 02 March 2026 00:53:46 +0000 (0:00:05.449) 0:05:22.199 ********** 2026-03-02 00:54:33.190173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-02 00:54:33.190185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-02 00:54:33.190190 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.190195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-02 00:54:33.190200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-02 00:54:33.190205 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.190215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-02 00:54:33.190225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-02 00:54:33.190230 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.190235 | orchestrator | 2026-03-02 00:54:33.190240 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] 
*********************** 2026-03-02 00:54:33.190248 | orchestrator | Monday 02 March 2026 00:53:47 +0000 (0:00:00.618) 0:05:22.817 ********** 2026-03-02 00:54:33.190253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-02 00:54:33.190258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-02 00:54:33.190264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-02 00:54:33.190269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-02 00:54:33.190274 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.190279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-02 00:54:33.190284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-02 00:54:33.190289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}) 
 2026-03-02 00:54:33.190294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-02 00:54:33.190299 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.190303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-02 00:54:33.190308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-02 00:54:33.190316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-02 00:54:33.190325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-02 00:54:33.190330 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.190334 | orchestrator | 2026-03-02 00:54:33.190339 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-02 00:54:33.190343 | orchestrator | Monday 02 March 2026 00:53:48 +0000 (0:00:01.700) 0:05:24.518 ********** 2026-03-02 00:54:33.190348 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.190353 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.190357 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.190362 | orchestrator 
| 2026-03-02 00:54:33.190366 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-02 00:54:33.190371 | orchestrator | Monday 02 March 2026 00:53:50 +0000 (0:00:01.299) 0:05:25.817 ********** 2026-03-02 00:54:33.190376 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.190380 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.190385 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.190389 | orchestrator | 2026-03-02 00:54:33.190394 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-02 00:54:33.190399 | orchestrator | Monday 02 March 2026 00:53:52 +0000 (0:00:02.023) 0:05:27.841 ********** 2026-03-02 00:54:33.190403 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.190408 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.190412 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.190417 | orchestrator | 2026-03-02 00:54:33.190422 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-02 00:54:33.190426 | orchestrator | Monday 02 March 2026 00:53:52 +0000 (0:00:00.280) 0:05:28.122 ********** 2026-03-02 00:54:33.190431 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.190435 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.190440 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.190444 | orchestrator | 2026-03-02 00:54:33.190449 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-02 00:54:33.190456 | orchestrator | Monday 02 March 2026 00:53:52 +0000 (0:00:00.283) 0:05:28.406 ********** 2026-03-02 00:54:33.190461 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.190466 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.190470 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.190475 | orchestrator | 
2026-03-02 00:54:33.190479 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-02 00:54:33.190484 | orchestrator | Monday 02 March 2026 00:53:53 +0000 (0:00:00.499) 0:05:28.905 ********** 2026-03-02 00:54:33.190489 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.190493 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.190498 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.190502 | orchestrator | 2026-03-02 00:54:33.190507 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-02 00:54:33.190511 | orchestrator | Monday 02 March 2026 00:53:53 +0000 (0:00:00.289) 0:05:29.194 ********** 2026-03-02 00:54:33.190516 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.190520 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.190525 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.190529 | orchestrator | 2026-03-02 00:54:33.190534 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-02 00:54:33.190539 | orchestrator | Monday 02 March 2026 00:53:53 +0000 (0:00:00.280) 0:05:29.475 ********** 2026-03-02 00:54:33.190543 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.190548 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.190552 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.190562 | orchestrator | 2026-03-02 00:54:33.190567 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-02 00:54:33.190571 | orchestrator | Monday 02 March 2026 00:53:54 +0000 (0:00:00.663) 0:05:30.139 ********** 2026-03-02 00:54:33.190576 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:54:33.190581 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:54:33.190585 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:54:33.190590 | orchestrator | 2026-03-02 
00:54:33.190595 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-02 00:54:33.190599 | orchestrator | Monday 02 March 2026 00:53:55 +0000 (0:00:00.721) 0:05:30.860 ********** 2026-03-02 00:54:33.190604 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:54:33.190608 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:54:33.190613 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:54:33.190617 | orchestrator | 2026-03-02 00:54:33.190622 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-02 00:54:33.190626 | orchestrator | Monday 02 March 2026 00:53:55 +0000 (0:00:00.296) 0:05:31.156 ********** 2026-03-02 00:54:33.190631 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:54:33.190635 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:54:33.190640 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:54:33.190645 | orchestrator | 2026-03-02 00:54:33.190649 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-02 00:54:33.190654 | orchestrator | Monday 02 March 2026 00:53:56 +0000 (0:00:00.924) 0:05:32.081 ********** 2026-03-02 00:54:33.190658 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:54:33.190663 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:54:33.190668 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:54:33.190672 | orchestrator | 2026-03-02 00:54:33.190677 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-02 00:54:33.190682 | orchestrator | Monday 02 March 2026 00:53:57 +0000 (0:00:01.066) 0:05:33.148 ********** 2026-03-02 00:54:33.190686 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:54:33.190691 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:54:33.190695 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:54:33.190700 | orchestrator | 2026-03-02 00:54:33.190704 | orchestrator | RUNNING HANDLER [loadbalancer : Start 
backup haproxy container] **************** 2026-03-02 00:54:33.190709 | orchestrator | Monday 02 March 2026 00:53:58 +0000 (0:00:00.913) 0:05:34.061 ********** 2026-03-02 00:54:33.190714 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.190719 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.190723 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.190728 | orchestrator | 2026-03-02 00:54:33.190733 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-02 00:54:33.190738 | orchestrator | Monday 02 March 2026 00:54:02 +0000 (0:00:04.035) 0:05:38.096 ********** 2026-03-02 00:54:33.190742 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:54:33.190747 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:54:33.190805 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:54:33.190824 | orchestrator | 2026-03-02 00:54:33.190829 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-02 00:54:33.190833 | orchestrator | Monday 02 March 2026 00:54:05 +0000 (0:00:02.597) 0:05:40.694 ********** 2026-03-02 00:54:33.190838 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.190842 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.190847 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.190851 | orchestrator | 2026-03-02 00:54:33.190856 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-02 00:54:33.190861 | orchestrator | Monday 02 March 2026 00:54:14 +0000 (0:00:09.370) 0:05:50.065 ********** 2026-03-02 00:54:33.190865 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:54:33.190870 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:54:33.190874 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:54:33.190879 | orchestrator | 2026-03-02 00:54:33.190883 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 
2026-03-02 00:54:33.190888 | orchestrator | Monday 02 March 2026 00:54:18 +0000 (0:00:04.082) 0:05:54.148 ********** 2026-03-02 00:54:33.190898 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:54:33.190902 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:54:33.190907 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:54:33.190911 | orchestrator | 2026-03-02 00:54:33.190916 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-02 00:54:33.190920 | orchestrator | Monday 02 March 2026 00:54:27 +0000 (0:00:08.940) 0:06:03.088 ********** 2026-03-02 00:54:33.190942 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.190946 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.190951 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.190955 | orchestrator | 2026-03-02 00:54:33.190960 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-02 00:54:33.190964 | orchestrator | Monday 02 March 2026 00:54:27 +0000 (0:00:00.304) 0:06:03.392 ********** 2026-03-02 00:54:33.190969 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.190978 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.190982 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.190987 | orchestrator | 2026-03-02 00:54:33.190991 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-02 00:54:33.190996 | orchestrator | Monday 02 March 2026 00:54:28 +0000 (0:00:00.374) 0:06:03.767 ********** 2026-03-02 00:54:33.191042 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.191050 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.191057 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.191065 | orchestrator | 2026-03-02 00:54:33.191072 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 
2026-03-02 00:54:33.191080 | orchestrator | Monday 02 March 2026 00:54:28 +0000 (0:00:00.641) 0:06:04.408 ********** 2026-03-02 00:54:33.191087 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.191094 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.191101 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.191106 | orchestrator | 2026-03-02 00:54:33.191110 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-02 00:54:33.191115 | orchestrator | Monday 02 March 2026 00:54:29 +0000 (0:00:00.327) 0:06:04.735 ********** 2026-03-02 00:54:33.191120 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.191125 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.191129 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.191134 | orchestrator | 2026-03-02 00:54:33.191139 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-02 00:54:33.191143 | orchestrator | Monday 02 March 2026 00:54:29 +0000 (0:00:00.307) 0:06:05.043 ********** 2026-03-02 00:54:33.191148 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:54:33.191152 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:54:33.191157 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:54:33.191164 | orchestrator | 2026-03-02 00:54:33.191172 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-02 00:54:33.191178 | orchestrator | Monday 02 March 2026 00:54:29 +0000 (0:00:00.317) 0:06:05.360 ********** 2026-03-02 00:54:33.191192 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:54:33.191206 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:54:33.191213 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:54:33.191220 | orchestrator | 2026-03-02 00:54:33.191226 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-02 
00:54:33.191232 | orchestrator | Monday 02 March 2026 00:54:31 +0000 (0:00:01.218) 0:06:06.579 ********** 2026-03-02 00:54:33.191241 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:54:33.191248 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:54:33.191255 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:54:33.191262 | orchestrator | 2026-03-02 00:54:33.191269 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:54:33.191276 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-02 00:54:33.191293 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-02 00:54:33.191301 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-02 00:54:33.191307 | orchestrator | 2026-03-02 00:54:33.191314 | orchestrator | 2026-03-02 00:54:33.191321 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 00:54:33.191333 | orchestrator | Monday 02 March 2026 00:54:31 +0000 (0:00:00.841) 0:06:07.421 ********** 2026-03-02 00:54:33.191339 | orchestrator | =============================================================================== 2026-03-02 00:54:33.191345 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 9.37s 2026-03-02 00:54:33.191352 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.94s 2026-03-02 00:54:33.191358 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.10s 2026-03-02 00:54:33.191364 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.61s 2026-03-02 00:54:33.191370 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.45s 2026-03-02 00:54:33.191376 | orchestrator | 
haproxy-config : Copying over glance haproxy config --------------------- 4.86s 2026-03-02 00:54:33.191382 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.59s 2026-03-02 00:54:33.191388 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.53s 2026-03-02 00:54:33.191394 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.52s 2026-03-02 00:54:33.191400 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.48s 2026-03-02 00:54:33.191405 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.36s 2026-03-02 00:54:33.191411 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.08s 2026-03-02 00:54:33.191418 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.08s 2026-03-02 00:54:33.191424 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.04s 2026-03-02 00:54:33.191430 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.95s 2026-03-02 00:54:33.191436 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.86s 2026-03-02 00:54:33.191442 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 3.85s 2026-03-02 00:54:33.191448 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 3.73s 2026-03-02 00:54:33.191455 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.69s 2026-03-02 00:54:33.191462 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.67s 2026-03-02 00:54:36.213208 | orchestrator | 2026-03-02 00:54:36 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:54:36.216068 | 
orchestrator | 2026-03-02 00:54:36 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:54:36.217912 | orchestrator | 2026-03-02 00:54:36 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:54:36.218170 | orchestrator | 2026-03-02 00:54:36 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:54:39.260788 | orchestrator | 2026-03-02 00:54:39 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:54:39.260949 | orchestrator | 2026-03-02 00:54:39 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:54:39.260967 | orchestrator | 2026-03-02 00:54:39 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:54:39.261487 | orchestrator | 2026-03-02 00:54:39 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:54:42.308316 | orchestrator | 2026-03-02 00:54:42 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:54:42.308470 | orchestrator | 2026-03-02 00:54:42 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:54:42.308493 | orchestrator | 2026-03-02 00:54:42 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:54:42.308595 | orchestrator | 2026-03-02 00:54:42 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:54:45.343652 | orchestrator | 2026-03-02 00:54:45 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:54:45.344395 | orchestrator | 2026-03-02 00:54:45 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:54:45.345023 | orchestrator | 2026-03-02 00:54:45 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:54:45.345113 | orchestrator | 2026-03-02 00:54:45 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:54:48.393212 | orchestrator | 2026-03-02 00:54:48 | INFO  | Task 
d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:54:48.393718 | orchestrator | 2026-03-02 00:54:48 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:54:48.394467 | orchestrator | 2026-03-02 00:54:48 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:54:48.394499 | orchestrator | 2026-03-02 00:54:48 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:54:51.433792 | orchestrator | 2026-03-02 00:54:51 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:54:51.434344 | orchestrator | 2026-03-02 00:54:51 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:54:51.435076 | orchestrator | 2026-03-02 00:54:51 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:54:51.435111 | orchestrator | 2026-03-02 00:54:51 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:54:54.527384 | orchestrator | 2026-03-02 00:54:54 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:54:54.527692 | orchestrator | 2026-03-02 00:54:54 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:54:54.528668 | orchestrator | 2026-03-02 00:54:54 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:54:54.528710 | orchestrator | 2026-03-02 00:54:54 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:54:57.563103 | orchestrator | 2026-03-02 00:54:57 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:54:57.563175 | orchestrator | 2026-03-02 00:54:57 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:54:57.563186 | orchestrator | 2026-03-02 00:54:57 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:54:57.563195 | orchestrator | 2026-03-02 00:54:57 | INFO  | Wait 1 second(s) until the next 
check 2026-03-02 00:55:00.587563 | orchestrator | 2026-03-02 00:55:00 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:55:00.588150 | orchestrator | 2026-03-02 00:55:00 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:55:00.589023 | orchestrator | 2026-03-02 00:55:00 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:55:00.589059 | orchestrator | 2026-03-02 00:55:00 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:55:03.623811 | orchestrator | 2026-03-02 00:55:03 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:55:03.625181 | orchestrator | 2026-03-02 00:55:03 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:55:03.628690 | orchestrator | 2026-03-02 00:55:03 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:55:03.629232 | orchestrator | 2026-03-02 00:55:03 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:55:06.684004 | orchestrator | 2026-03-02 00:55:06 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:55:06.684066 | orchestrator | 2026-03-02 00:55:06 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:55:06.684797 | orchestrator | 2026-03-02 00:55:06 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:55:06.684825 | orchestrator | 2026-03-02 00:55:06 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:55:09.732399 | orchestrator | 2026-03-02 00:55:09 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:55:09.733864 | orchestrator | 2026-03-02 00:55:09 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:55:09.735556 | orchestrator | 2026-03-02 00:55:09 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 
00:55:09.735614 | orchestrator | 2026-03-02 00:55:09 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:55:12.794484 | orchestrator | 2026-03-02 00:55:12 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:55:12.797745 | orchestrator | 2026-03-02 00:55:12 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:55:12.800791 | orchestrator | 2026-03-02 00:55:12 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:55:12.802237 | orchestrator | 2026-03-02 00:55:12 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:55:15.845315 | orchestrator | 2026-03-02 00:55:15 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:55:15.846565 | orchestrator | 2026-03-02 00:55:15 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:55:15.848620 | orchestrator | 2026-03-02 00:55:15 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:55:15.848671 | orchestrator | 2026-03-02 00:55:15 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:55:18.899737 | orchestrator | 2026-03-02 00:55:18 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:55:18.901388 | orchestrator | 2026-03-02 00:55:18 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:55:18.903190 | orchestrator | 2026-03-02 00:55:18 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:55:18.903244 | orchestrator | 2026-03-02 00:55:18 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:55:21.945215 | orchestrator | 2026-03-02 00:55:21 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:55:21.945637 | orchestrator | 2026-03-02 00:55:21 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:55:21.948750 | orchestrator | 2026-03-02 00:55:21 | 
INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:55:21.948789 | orchestrator | 2026-03-02 00:55:21 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:55:24.998800 | orchestrator | 2026-03-02 00:55:24 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:55:25.001426 | orchestrator | 2026-03-02 00:55:25 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:55:25.003477 | orchestrator | 2026-03-02 00:55:25 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:55:25.003607 | orchestrator | 2026-03-02 00:55:25 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:55:28.044453 | orchestrator | 2026-03-02 00:55:28 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:55:28.045724 | orchestrator | 2026-03-02 00:55:28 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:55:28.047323 | orchestrator | 2026-03-02 00:55:28 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:55:28.047379 | orchestrator | 2026-03-02 00:55:28 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:55:31.094792 | orchestrator | 2026-03-02 00:55:31 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:55:31.097250 | orchestrator | 2026-03-02 00:55:31 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:55:31.099482 | orchestrator | 2026-03-02 00:55:31 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:55:31.099544 | orchestrator | 2026-03-02 00:55:31 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:55:34.143504 | orchestrator | 2026-03-02 00:55:34 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED 2026-03-02 00:55:34.144659 | orchestrator | 2026-03-02 00:55:34 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in 
state STARTED
2026-03-02 00:55:34.146149 | orchestrator | 2026-03-02 00:55:34 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED
2026-03-02 00:55:34.147070 | orchestrator | 2026-03-02 00:55:34 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:55:37.191338 | orchestrator | 2026-03-02 00:55:37 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:55:37.193393 | orchestrator | 2026-03-02 00:55:37 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED
2026-03-02 00:55:37.194207 | orchestrator | 2026-03-02 00:55:37 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED
2026-03-02 00:55:37.194507 | orchestrator | 2026-03-02 00:55:37 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:55:40.227194 | orchestrator | 2026-03-02 00:55:40 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:55:40.228998 | orchestrator | 2026-03-02 00:55:40 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED
2026-03-02 00:55:40.230418 | orchestrator | 2026-03-02 00:55:40 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED
2026-03-02 00:55:40.230465 | orchestrator | 2026-03-02 00:55:40 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:55:43.266430 | orchestrator | 2026-03-02 00:55:43 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:55:43.268266 | orchestrator | 2026-03-02 00:55:43 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED
2026-03-02 00:55:43.269734 | orchestrator | 2026-03-02 00:55:43 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED
2026-03-02 00:55:43.269938 | orchestrator | 2026-03-02 00:55:43 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:55:46.304467 | orchestrator | 2026-03-02 00:55:46 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:55:46.304901 | orchestrator | 2026-03-02 00:55:46 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED
2026-03-02 00:55:46.305763 | orchestrator | 2026-03-02 00:55:46 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED
2026-03-02 00:55:46.305779 | orchestrator | 2026-03-02 00:55:46 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:55:49.356996 | orchestrator | 2026-03-02 00:55:49 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:55:49.359398 | orchestrator | 2026-03-02 00:55:49 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED
2026-03-02 00:55:49.362236 | orchestrator | 2026-03-02 00:55:49 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED
2026-03-02 00:55:49.362319 | orchestrator | 2026-03-02 00:55:49 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:55:52.404989 | orchestrator | 2026-03-02 00:55:52 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:55:52.407855 | orchestrator | 2026-03-02 00:55:52 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED
2026-03-02 00:55:52.408504 | orchestrator | 2026-03-02 00:55:52 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED
2026-03-02 00:55:52.408706 | orchestrator | 2026-03-02 00:55:52 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:55:55.453235 | orchestrator | 2026-03-02 00:55:55 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:55:55.456264 | orchestrator | 2026-03-02 00:55:55 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED
2026-03-02 00:55:55.458445 | orchestrator | 2026-03-02 00:55:55 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED
2026-03-02 00:55:55.458545 | orchestrator | 2026-03-02 00:55:55 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:55:58.505028 | orchestrator | 2026-03-02 00:55:58 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:55:58.506547 | orchestrator | 2026-03-02 00:55:58 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED
2026-03-02 00:55:58.509753 | orchestrator | 2026-03-02 00:55:58 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED
2026-03-02 00:55:58.510697 | orchestrator | 2026-03-02 00:55:58 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:56:01.551919 | orchestrator | 2026-03-02 00:56:01 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:56:01.552450 | orchestrator | 2026-03-02 00:56:01 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED
2026-03-02 00:56:01.554159 | orchestrator | 2026-03-02 00:56:01 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED
2026-03-02 00:56:01.554256 | orchestrator | 2026-03-02 00:56:01 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:56:04.596241 | orchestrator | 2026-03-02 00:56:04 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:56:04.598178 | orchestrator | 2026-03-02 00:56:04 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED
2026-03-02 00:56:04.599749 | orchestrator | 2026-03-02 00:56:04 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED
2026-03-02 00:56:04.599782 | orchestrator | 2026-03-02 00:56:04 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:56:07.645034 | orchestrator | 2026-03-02 00:56:07 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:56:07.646734 | orchestrator | 2026-03-02 00:56:07 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED
2026-03-02 00:56:07.647487 | orchestrator | 2026-03-02 00:56:07 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED
2026-03-02 00:56:07.647593 | orchestrator | 2026-03-02 00:56:07 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:56:10.697034 | orchestrator | 2026-03-02 00:56:10 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:56:10.699796 | orchestrator | 2026-03-02 00:56:10 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED
2026-03-02 00:56:10.701404 | orchestrator | 2026-03-02 00:56:10 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED
2026-03-02 00:56:10.701497 | orchestrator | 2026-03-02 00:56:10 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:56:13.746313 | orchestrator | 2026-03-02 00:56:13 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:56:13.748318 | orchestrator | 2026-03-02 00:56:13 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED
2026-03-02 00:56:13.751363 | orchestrator | 2026-03-02 00:56:13 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED
2026-03-02 00:56:13.751429 | orchestrator | 2026-03-02 00:56:13 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:56:16.799307 | orchestrator | 2026-03-02 00:56:16 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:56:16.800062 | orchestrator | 2026-03-02 00:56:16 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED
2026-03-02 00:56:16.802155 | orchestrator | 2026-03-02 00:56:16 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED
2026-03-02 00:56:16.802203 | orchestrator | 2026-03-02 00:56:16 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:56:19.847604 | orchestrator | 2026-03-02 00:56:19 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:56:19.849048 | orchestrator | 2026-03-02 00:56:19 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED
2026-03-02 00:56:19.851499 | orchestrator | 2026-03-02 00:56:19 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED
2026-03-02 00:56:19.851538 | orchestrator | 2026-03-02 00:56:19 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:56:22.898171 | orchestrator | 2026-03-02 00:56:22 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:56:22.900331 | orchestrator | 2026-03-02 00:56:22 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED
2026-03-02 00:56:22.902362 | orchestrator | 2026-03-02 00:56:22 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED
2026-03-02 00:56:22.902420 | orchestrator | 2026-03-02 00:56:22 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:56:25.962580 | orchestrator | 2026-03-02 00:56:25 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:56:25.966713 | orchestrator | 2026-03-02 00:56:25 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED
2026-03-02 00:56:25.968419 | orchestrator | 2026-03-02 00:56:25 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED
2026-03-02 00:56:25.968514 | orchestrator | 2026-03-02 00:56:25 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:56:29.011158 | orchestrator | 2026-03-02 00:56:29 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:56:29.012891 | orchestrator | 2026-03-02 00:56:29 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED
2026-03-02 00:56:29.014344 | orchestrator | 2026-03-02 00:56:29 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED
2026-03-02 00:56:29.014419 | orchestrator | 2026-03-02 00:56:29 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:56:32.070173 | orchestrator | 2026-03-02 00:56:32 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:56:32.071661 | orchestrator | 2026-03-02 00:56:32 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED
2026-03-02 00:56:32.074001 | orchestrator | 2026-03-02 00:56:32 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED
2026-03-02 00:56:32.074157 | orchestrator | 2026-03-02 00:56:32 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:56:35.126181 | orchestrator | 2026-03-02 00:56:35 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state STARTED
2026-03-02 00:56:35.128448 | orchestrator | 2026-03-02 00:56:35 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED
2026-03-02 00:56:35.131088 | orchestrator | 2026-03-02 00:56:35 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED
2026-03-02 00:56:35.131474 | orchestrator | 2026-03-02 00:56:35 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:56:38.169715 | orchestrator | 2026-03-02 00:56:38 | INFO  | Task fbfb5221-51e7-4938-a420-7426216897f5 is in state STARTED
2026-03-02 00:56:38.174815 | orchestrator | 2026-03-02 00:56:38 | INFO  | Task d62bf8b3-52a9-45da-835a-21bede4a9501 is in state SUCCESS
2026-03-02 00:56:38.176978 | orchestrator |
2026-03-02 00:56:38.177023 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-02 00:56:38.177029 | orchestrator | 2.16.14
2026-03-02 00:56:38.177034 | orchestrator |
2026-03-02 00:56:38.177041 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-03-02 00:56:38.177056 | orchestrator |
2026-03-02 00:56:38.177064 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-02 00:56:38.177081 | orchestrator | Monday 02 March 2026 00:46:09 +0000 (0:00:00.624) 0:00:00.624 **********
2026-03-02 00:56:38.177088 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:56:38.177094 | orchestrator |
2026-03-02 00:56:38.177101 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-02 00:56:38.177107 | orchestrator | Monday 02 March 2026 00:46:10 +0000 (0:00:01.036) 0:00:01.661 **********
2026-03-02 00:56:38.177113 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.177120 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.177126 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.177134 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.177138 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.177142 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.177146 | orchestrator |
2026-03-02 00:56:38.177149 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-02 00:56:38.177153 | orchestrator | Monday 02 March 2026 00:46:12 +0000 (0:00:01.545) 0:00:03.206 **********
2026-03-02 00:56:38.177157 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.177161 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.177165 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.177168 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.177172 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.177176 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.177180 | orchestrator |
2026-03-02 00:56:38.177183 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-02 00:56:38.177187 | orchestrator | Monday 02 March 2026 00:46:13 +0000 (0:00:00.745) 0:00:03.952 **********
2026-03-02 00:56:38.177191 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.177195 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.177198 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.177214 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.177218 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.177222 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.177225 | orchestrator |
2026-03-02 00:56:38.177229 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-02 00:56:38.177233 | orchestrator | Monday 02 March 2026 00:46:14 +0000 (0:00:00.892) 0:00:04.844 **********
2026-03-02 00:56:38.177237 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.177240 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.177244 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.177248 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.177251 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.177255 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.177259 | orchestrator |
2026-03-02 00:56:38.177262 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-02 00:56:38.177266 | orchestrator | Monday 02 March 2026 00:46:14 +0000 (0:00:00.640) 0:00:05.484 **********
2026-03-02 00:56:38.177291 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.177295 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.177299 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.177303 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.177306 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.177337 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.177342 | orchestrator |
2026-03-02 00:56:38.177346 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-02 00:56:38.177350 | orchestrator | Monday 02 March 2026 00:46:15 +0000 (0:00:00.451) 0:00:05.936 **********
2026-03-02 00:56:38.177353 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.177357 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.177361 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.177365 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.177369 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.177372 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.177376 | orchestrator |
2026-03-02 00:56:38.177380 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-02 00:56:38.177384 | orchestrator | Monday 02 March 2026 00:46:15 +0000 (0:00:00.708) 0:00:06.645 **********
2026-03-02 00:56:38.177387 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.177392 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.177395 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.177399 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.177403 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.177406 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.177410 | orchestrator |
2026-03-02 00:56:38.177414 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-02 00:56:38.177418 | orchestrator | Monday 02 March 2026 00:46:16 +0000 (0:00:00.556) 0:00:07.201 **********
2026-03-02 00:56:38.177422 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.177425 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.177429 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.177433 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.177436 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.177440 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.177444 | orchestrator |
2026-03-02 00:56:38.177448 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-02 00:56:38.177451 | orchestrator | Monday 02 March 2026 00:46:17 +0000 (0:00:00.754) 0:00:07.955 **********
2026-03-02 00:56:38.177465 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-02 00:56:38.177469 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-02 00:56:38.177473 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-02 00:56:38.177477 | orchestrator |
2026-03-02 00:56:38.177481 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-02 00:56:38.177484 | orchestrator | Monday 02 March 2026 00:46:17 +0000 (0:00:00.709) 0:00:08.665 **********
2026-03-02 00:56:38.177492 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.177495 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.177499 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.177511 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.177515 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.177519 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.177522 | orchestrator |
2026-03-02 00:56:38.177538 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-02 00:56:38.177542 | orchestrator | Monday 02 March 2026 00:46:19 +0000 (0:00:01.295) 0:00:09.961 **********
2026-03-02 00:56:38.177549 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-02 00:56:38.177553 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-02 00:56:38.177556 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-02 00:56:38.177560 | orchestrator |
2026-03-02 00:56:38.177564 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-02 00:56:38.177568 | orchestrator | Monday 02 March 2026 00:46:21 +0000 (0:00:02.217) 0:00:12.178 **********
2026-03-02 00:56:38.177572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-02 00:56:38.177575 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-02 00:56:38.177579 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-02 00:56:38.177583 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.177588 | orchestrator |
2026-03-02 00:56:38.177592 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-02 00:56:38.177596 | orchestrator | Monday 02 March 2026 00:46:21 +0000 (0:00:00.507) 0:00:12.685 **********
2026-03-02 00:56:38.177602 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.177607 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.177612 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.177641 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.177646 | orchestrator |
2026-03-02 00:56:38.177650 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-02 00:56:38.177655 | orchestrator | Monday 02 March 2026 00:46:22 +0000 (0:00:00.629) 0:00:13.315 **********
2026-03-02 00:56:38.177660 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.177666 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.177670 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.177678 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.177682 | orchestrator |
2026-03-02 00:56:38.177687 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-02 00:56:38.177691 | orchestrator | Monday 02 March 2026 00:46:22 +0000 (0:00:00.357) 0:00:13.673 **********
2026-03-02 00:56:38.177700 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-02 00:46:19.753988', 'end': '2026-03-02 00:46:19.860931', 'delta': '0:00:00.106943', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.177709 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-02 00:46:20.419516', 'end': '2026-03-02 00:46:20.520823', 'delta': '0:00:00.101307', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.177714 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-02 00:46:21.089797', 'end': '2026-03-02 00:46:21.198463', 'delta': '0:00:00.108666', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.177718 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.177722 | orchestrator |
2026-03-02 00:56:38.177727 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-02 00:56:38.177732 | orchestrator | Monday 02 March 2026 00:46:23 +0000 (0:00:00.285) 0:00:13.959 **********
2026-03-02 00:56:38.177736 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.177740 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.177745 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.177749 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.177818 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.177822 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.177826 | orchestrator |
2026-03-02 00:56:38.177830 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-02 00:56:38.177833 | orchestrator | Monday 02 March 2026 00:46:24 +0000 (0:00:01.813) 0:00:15.772 **********
2026-03-02 00:56:38.177837 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-02 00:56:38.177841 | orchestrator |
2026-03-02 00:56:38.177845 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-02 00:56:38.177849 | orchestrator | Monday 02 March 2026 00:46:25 +0000 (0:00:00.557) 0:00:16.329 **********
2026-03-02 00:56:38.177856 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.177860 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.177863 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.177867 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.177871 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.177874 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.177878 | orchestrator |
2026-03-02 00:56:38.177894 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-02 00:56:38.177899 | orchestrator | Monday 02 March 2026 00:46:26 +0000 (0:00:00.814) 0:00:17.144 **********
2026-03-02 00:56:38.177902 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.177906 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.177910 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.177913 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.177917 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.177921 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.177929 | orchestrator |
2026-03-02 00:56:38.177933 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-02 00:56:38.177937 | orchestrator | Monday 02 March 2026 00:46:27 +0000 (0:00:01.379) 0:00:18.523 **********
2026-03-02 00:56:38.177941 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.177949 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.177953 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.177957 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.177960 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.177964 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.177968 | orchestrator |
2026-03-02 00:56:38.177971 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-02 00:56:38.177975 | orchestrator | Monday 02 March 2026 00:46:28 +0000 (0:00:00.807) 0:00:19.331 **********
2026-03-02 00:56:38.177979 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.177983 | orchestrator |
2026-03-02 00:56:38.177986 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-02 00:56:38.177990 | orchestrator | Monday 02 March 2026 00:46:28 +0000 (0:00:00.149) 0:00:19.481 **********
2026-03-02 00:56:38.177994 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.177997 | orchestrator |
2026-03-02 00:56:38.178001 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-02 00:56:38.178005 | orchestrator | Monday 02 March 2026 00:46:29 +0000 (0:00:00.416) 0:00:19.897 **********
2026-03-02 00:56:38.178009 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.178041 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.178047 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.178058 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.178079 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.178085 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.178092 | orchestrator |
2026-03-02 00:56:38.178098 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-02 00:56:38.178104 | orchestrator | Monday 02 March 2026 00:46:30 +0000 (0:00:01.492) 0:00:21.390 **********
2026-03-02 00:56:38.178109 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.178118 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.178125 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.178130 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.178136 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.178142 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.178148 | orchestrator |
2026-03-02 00:56:38.178154 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-02 00:56:38.178160 | orchestrator | Monday 02 March 2026 00:46:31 +0000 (0:00:00.883) 0:00:22.273 **********
2026-03-02 00:56:38.178166 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.178173 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.178185 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.178192 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.178198 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.178205 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.178212 | orchestrator |
2026-03-02 00:56:38.178218 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-02 00:56:38.178224 | orchestrator | Monday 02 March 2026 00:46:32 +0000 (0:00:00.732) 0:00:23.006 **********
2026-03-02 00:56:38.178230 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.178300 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.178324 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.178329 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.178333 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.178337 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.178340 | orchestrator |
2026-03-02 00:56:38.178344 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-02 00:56:38.178348 | orchestrator | Monday 02 March 2026 00:46:33 +0000 (0:00:01.380) 0:00:24.386 **********
2026-03-02 00:56:38.178352 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.178355 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.178359 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.178363 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.178367 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.178370 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.178374 | orchestrator |
2026-03-02 00:56:38.178378 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-02 00:56:38.178381 | orchestrator | Monday 02 March 2026 00:46:34 +0000 (0:00:00.750) 0:00:25.136 **********
2026-03-02 00:56:38.178385 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.178389 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.178393 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.178397 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.178400 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.178404 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.178408 | orchestrator |
2026-03-02 00:56:38.178411 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-02 00:56:38.178415 | orchestrator | Monday 02 March 2026 00:46:36 +0000 (0:00:01.660) 0:00:26.797 **********
2026-03-02 00:56:38.178419 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.178423 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.178426 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.178430 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.178434 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.178437 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.178441 | orchestrator |
2026-03-02 00:56:38.178445 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-02 00:56:38.178449 | orchestrator | Monday 02 March 2026 00:46:37 +0000 (0:00:00.991) 0:00:27.789 **********
2026-03-02 00:56:38.178453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de3a51bd--019b--527a--8dea--ff4c94e5d801-osd--block--de3a51bd--019b--527a--8dea--ff4c94e5d801', 'dm-uuid-LVM-X7cvTTEIT9w4bQ22AYz8LHyJx3B21eeHRkfsquG31QVc2M6iZhDe3TjYXiYe1V7w'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-02 00:56:38.178458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8a84d633--ba5b--5049--b6da--2482ee8b3083-osd--block--8a84d633--ba5b--5049--b6da--2482ee8b3083', 'dm-uuid-LVM-0raIyonegJuhMDcTKEhe4ST5v39sRW18JPOuu0DO5SvF8e5nlgH3QJUQBYP7184M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-02 00:56:38.178474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c1d64d47--37ed--5019--b7d5--718691437d08-osd--block--c1d64d47--37ed--5019--b7d5--718691437d08', 'dm-uuid-LVM-C7wI1BFuiaw8aSvNbWcEoZ1EQsbpTxTnNfm95Z36zsnDlGBAxIUPcNHZvHLHwecj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-02 00:56:38.178479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3d7235d6--f117--525f--ba2d--9ab371851486-osd--block--3d7235d6--f117--525f--ba2d--9ab371851486', 'dm-uuid-LVM-RHsRT0YPyFvhCvBP6SDzu5rOjXWQZDRQJGf4somc7uro0NlBFNAFPACPiQn8QAtF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-02 00:56:38.178483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--271875e3--8908--5e0e--b413--64afee9519da-osd--block--271875e3--8908--5e0e--b413--64afee9519da', 'dm-uuid-LVM-WNv815OLXXMZRQbtiKCV4Kr3DgLyU9EOVlEc6MsXQpd8yGIWlVJyJqw6pfocxWsk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-02 00:56:38.178487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-02 00:56:38.178491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-02 00:56:38.178495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-02 00:56:38.178499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-02 00:56:38.178503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-02 00:56:38.178513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-02 00:56:38.178519 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--52125f52--6af3--5290--9fed--9584660c39a2-osd--block--52125f52--6af3--5290--9fed--9584660c39a2', 'dm-uuid-LVM-88ZyFcat0F1j3lRWAWVpLMTFRXW0sRdGtrjcGy2UECAST2MzryQfRyInmupddH55'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-02 00:56:38.178523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-02 00:56:38.178527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '',
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178539 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178543 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8', 'scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part1', 'scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part14', 'scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part15', 'scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part16', 'scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:56:38.178565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c1d64d47--37ed--5019--b7d5--718691437d08-osd--block--c1d64d47--37ed--5019--b7d5--718691437d08'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-m5OWFx-qV6J-SOnf-y3AO-7CMl-3bkk-865ljm', 'scsi-0QEMU_QEMU_HARDDISK_a8122f95-b81e-4023-b303-8950dd4c9351', 'scsi-SQEMU_QEMU_HARDDISK_a8122f95-b81e-4023-b303-8950dd4c9351'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:56:38.178578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178585 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--3d7235d6--f117--525f--ba2d--9ab371851486-osd--block--3d7235d6--f117--525f--ba2d--9ab371851486'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fExIFV-GGb0-7Kbe-xUmS-cmPa-fq8j-EcF8yi', 'scsi-0QEMU_QEMU_HARDDISK_ac18fc5b-7614-46f9-bf3c-282e02a3d506', 'scsi-SQEMU_QEMU_HARDDISK_ac18fc5b-7614-46f9-bf3c-282e02a3d506'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:56:38.178594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178599 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3458d56d-fe8a-4fae-86e7-5458fccbe7bb', 'scsi-SQEMU_QEMU_HARDDISK_3458d56d-fe8a-4fae-86e7-5458fccbe7bb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:56:38.178607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-02-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:56:38.178611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-02 00:56:38.178622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178626 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178630 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178643 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8', 'scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:56:38.178661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba', 'scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part1', 'scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part14', 'scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part15', 'scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part16', 'scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:56:38.178665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--271875e3--8908--5e0e--b413--64afee9519da-osd--block--271875e3--8908--5e0e--b413--64afee9519da'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GibLfR-79z8-CZQQ-AaT6-vyUn-mfC4-7uiS0U', 'scsi-0QEMU_QEMU_HARDDISK_e7c30f24-07cf-4e73-8c7c-bba1057c8cb7', 'scsi-SQEMU_QEMU_HARDDISK_e7c30f24-07cf-4e73-8c7c-bba1057c8cb7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:56:38.178680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178684 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--de3a51bd--019b--527a--8dea--ff4c94e5d801-osd--block--de3a51bd--019b--527a--8dea--ff4c94e5d801'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QqUOyE-CENj-AKoE-jSXL-82K2-5S2Y-c6X3Nb', 'scsi-0QEMU_QEMU_HARDDISK_5b76853c-a11b-45e9-97a5-74de733f1116', 'scsi-SQEMU_QEMU_HARDDISK_5b76853c-a11b-45e9-97a5-74de733f1116'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:56:38.178700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8a84d633--ba5b--5049--b6da--2482ee8b3083-osd--block--8a84d633--ba5b--5049--b6da--2482ee8b3083'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XuzJqd-HbZT-pgPn-w1pW-C0el-OSmi-JJyU7B', 'scsi-0QEMU_QEMU_HARDDISK_34a77e0f-df07-4c87-b046-7d039bca2077', 'scsi-SQEMU_QEMU_HARDDISK_34a77e0f-df07-4c87-b046-7d039bca2077'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:56:38.178711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2184af5-6da0-496d-b48a-b0daa217c842', 'scsi-SQEMU_QEMU_HARDDISK_b2184af5-6da0-496d-b48a-b0daa217c842'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:56:38.178720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--52125f52--6af3--5290--9fed--9584660c39a2-osd--block--52125f52--6af3--5290--9fed--9584660c39a2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3udVJJ-Z4dW-1pEk-THR1-4Uh3-fd0B-MD3V3e', 'scsi-0QEMU_QEMU_HARDDISK_7341868e-8f6a-460c-870a-5a0cce1fa311', 'scsi-SQEMU_QEMU_HARDDISK_7341868e-8f6a-460c-870a-5a0cce1fa311'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:56:38.178730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-02-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 
KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:56:38.178747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61056674-f458-4118-8944-0d8cbda618ed', 'scsi-SQEMU_QEMU_HARDDISK_61056674-f458-4118-8944-0d8cbda618ed'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61056674-f458-4118-8944-0d8cbda618ed-part1', 'scsi-SQEMU_QEMU_HARDDISK_61056674-f458-4118-8944-0d8cbda618ed-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61056674-f458-4118-8944-0d8cbda618ed-part14', 'scsi-SQEMU_QEMU_HARDDISK_61056674-f458-4118-8944-0d8cbda618ed-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61056674-f458-4118-8944-0d8cbda618ed-part15', 'scsi-SQEMU_QEMU_HARDDISK_61056674-f458-4118-8944-0d8cbda618ed-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61056674-f458-4118-8944-0d8cbda618ed-part16', 'scsi-SQEMU_QEMU_HARDDISK_61056674-f458-4118-8944-0d8cbda618ed-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:56:38.178768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24fb8e5a-509d-4406-a727-cf15b40a450f', 'scsi-SQEMU_QEMU_HARDDISK_24fb8e5a-509d-4406-a727-cf15b40a450f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:56:38.178798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-02-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:56:38.178809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-02-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 
KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:56:38.178816 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.178852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178873 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71a7a740-2bf8-4a84-80fa-758afac521da', 'scsi-SQEMU_QEMU_HARDDISK_71a7a740-2bf8-4a84-80fa-758afac521da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71a7a740-2bf8-4a84-80fa-758afac521da-part1', 'scsi-SQEMU_QEMU_HARDDISK_71a7a740-2bf8-4a84-80fa-758afac521da-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71a7a740-2bf8-4a84-80fa-758afac521da-part14', 'scsi-SQEMU_QEMU_HARDDISK_71a7a740-2bf8-4a84-80fa-758afac521da-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71a7a740-2bf8-4a84-80fa-758afac521da-part15', 'scsi-SQEMU_QEMU_HARDDISK_71a7a740-2bf8-4a84-80fa-758afac521da-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71a7a740-2bf8-4a84-80fa-758afac521da-part16', 'scsi-SQEMU_QEMU_HARDDISK_71a7a740-2bf8-4a84-80fa-758afac521da-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:56:38.178956 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-02-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:56:38.178964 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.178969 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.178973 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.178977 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.178982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.178995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.179005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.179010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.179015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.179019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:56:38.179027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_143a36a5-297c-4e67-888b-5cce7baa02e1', 'scsi-SQEMU_QEMU_HARDDISK_143a36a5-297c-4e67-888b-5cce7baa02e1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_143a36a5-297c-4e67-888b-5cce7baa02e1-part1', 'scsi-SQEMU_QEMU_HARDDISK_143a36a5-297c-4e67-888b-5cce7baa02e1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_143a36a5-297c-4e67-888b-5cce7baa02e1-part14', 'scsi-SQEMU_QEMU_HARDDISK_143a36a5-297c-4e67-888b-5cce7baa02e1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_143a36a5-297c-4e67-888b-5cce7baa02e1-part15', 'scsi-SQEMU_QEMU_HARDDISK_143a36a5-297c-4e67-888b-5cce7baa02e1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 
'5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_143a36a5-297c-4e67-888b-5cce7baa02e1-part16', 'scsi-SQEMU_QEMU_HARDDISK_143a36a5-297c-4e67-888b-5cce7baa02e1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:56:38.179037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-02-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:56:38.179043 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.179049 | orchestrator | 2026-03-02 00:56:38.179062 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-02 00:56:38.179069 | orchestrator | Monday 02 March 2026 00:46:38 +0000 (0:00:01.311) 0:00:29.101 ********** 2026-03-02 00:56:38.179076 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--271875e3--8908--5e0e--b413--64afee9519da-osd--block--271875e3--8908--5e0e--b413--64afee9519da', 'dm-uuid-LVM-WNv815OLXXMZRQbtiKCV4Kr3DgLyU9EOVlEc6MsXQpd8yGIWlVJyJqw6pfocxWsk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179087 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--52125f52--6af3--5290--9fed--9584660c39a2-osd--block--52125f52--6af3--5290--9fed--9584660c39a2', 'dm-uuid-LVM-88ZyFcat0F1j3lRWAWVpLMTFRXW0sRdGtrjcGy2UECAST2MzryQfRyInmupddH55'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179094 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-03-02 00:56:38.179101 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179108 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179368 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179407 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179423 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de3a51bd--019b--527a--8dea--ff4c94e5d801-osd--block--de3a51bd--019b--527a--8dea--ff4c94e5d801', 'dm-uuid-LVM-X7cvTTEIT9w4bQ22AYz8LHyJx3B21eeHRkfsquG31QVc2M6iZhDe3TjYXiYe1V7w'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179430 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179436 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c1d64d47--37ed--5019--b7d5--718691437d08-osd--block--c1d64d47--37ed--5019--b7d5--718691437d08', 'dm-uuid-LVM-C7wI1BFuiaw8aSvNbWcEoZ1EQsbpTxTnNfm95Z36zsnDlGBAxIUPcNHZvHLHwecj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179444 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8a84d633--ba5b--5049--b6da--2482ee8b3083-osd--block--8a84d633--ba5b--5049--b6da--2482ee8b3083', 'dm-uuid-LVM-0raIyonegJuhMDcTKEhe4ST5v39sRW18JPOuu0DO5SvF8e5nlgH3QJUQBYP7184M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179460 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': 
'', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179465 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3d7235d6--f117--525f--ba2d--9ab371851486-osd--block--3d7235d6--f117--525f--ba2d--9ab371851486', 'dm-uuid-LVM-RHsRT0YPyFvhCvBP6SDzu5rOjXWQZDRQJGf4somc7uro0NlBFNAFPACPiQn8QAtF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179472 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179476 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179480 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179484 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179491 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179497 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179504 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179508 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179512 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179523 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8', 'scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-02 00:56:38.179531 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179535 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179539 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179548 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8', 'scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part1', 'scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part14', 'scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part15', 'scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part16', 'scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179557 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179562 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--271875e3--8908--5e0e--b413--64afee9519da-osd--block--271875e3--8908--5e0e--b413--64afee9519da'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GibLfR-79z8-CZQQ-AaT6-vyUn-mfC4-7uiS0U', 'scsi-0QEMU_QEMU_HARDDISK_e7c30f24-07cf-4e73-8c7c-bba1057c8cb7', 'scsi-SQEMU_QEMU_HARDDISK_e7c30f24-07cf-4e73-8c7c-bba1057c8cb7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179566 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c1d64d47--37ed--5019--b7d5--718691437d08-osd--block--c1d64d47--37ed--5019--b7d5--718691437d08'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-m5OWFx-qV6J-SOnf-y3AO-7CMl-3bkk-865ljm', 'scsi-0QEMU_QEMU_HARDDISK_a8122f95-b81e-4023-b303-8950dd4c9351', 'scsi-SQEMU_QEMU_HARDDISK_a8122f95-b81e-4023-b303-8950dd4c9351'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179570 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179624 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--3d7235d6--f117--525f--ba2d--9ab371851486-osd--block--3d7235d6--f117--525f--ba2d--9ab371851486'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fExIFV-GGb0-7Kbe-xUmS-cmPa-fq8j-EcF8yi', 'scsi-0QEMU_QEMU_HARDDISK_ac18fc5b-7614-46f9-bf3c-282e02a3d506', 'scsi-SQEMU_QEMU_HARDDISK_ac18fc5b-7614-46f9-bf3c-282e02a3d506'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179633 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179637 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3458d56d-fe8a-4fae-86e7-5458fccbe7bb', 'scsi-SQEMU_QEMU_HARDDISK_3458d56d-fe8a-4fae-86e7-5458fccbe7bb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179641 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179646 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-02-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179652 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179661 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--52125f52--6af3--5290--9fed--9584660c39a2-osd--block--52125f52--6af3--5290--9fed--9584660c39a2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3udVJJ-Z4dW-1pEk-THR1-4Uh3-fd0B-MD3V3e', 'scsi-0QEMU_QEMU_HARDDISK_7341868e-8f6a-460c-870a-5a0cce1fa311', 'scsi-SQEMU_QEMU_HARDDISK_7341868e-8f6a-460c-870a-5a0cce1fa311'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179665 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179669 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.179673 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179678 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24fb8e5a-509d-4406-a727-cf15b40a450f', 'scsi-SQEMU_QEMU_HARDDISK_24fb8e5a-509d-4406-a727-cf15b40a450f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179682 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179691 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179698 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179702 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-02-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179722 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179726 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179730 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.179734 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179744 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba', 'scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part1', 'scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part14', 'scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part15', 'scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part16', 'scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179777 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179782 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179786 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--de3a51bd--019b--527a--8dea--ff4c94e5d801-osd--block--de3a51bd--019b--527a--8dea--ff4c94e5d801'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QqUOyE-CENj-AKoE-jSXL-82K2-5S2Y-c6X3Nb', 'scsi-0QEMU_QEMU_HARDDISK_5b76853c-a11b-45e9-97a5-74de733f1116', 'scsi-SQEMU_QEMU_HARDDISK_5b76853c-a11b-45e9-97a5-74de733f1116'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179798 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179802 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179806 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179811 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61056674-f458-4118-8944-0d8cbda618ed', 'scsi-SQEMU_QEMU_HARDDISK_61056674-f458-4118-8944-0d8cbda618ed'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61056674-f458-4118-8944-0d8cbda618ed-part1', 'scsi-SQEMU_QEMU_HARDDISK_61056674-f458-4118-8944-0d8cbda618ed-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61056674-f458-4118-8944-0d8cbda618ed-part14', 'scsi-SQEMU_QEMU_HARDDISK_61056674-f458-4118-8944-0d8cbda618ed-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61056674-f458-4118-8944-0d8cbda618ed-part15', 'scsi-SQEMU_QEMU_HARDDISK_61056674-f458-4118-8944-0d8cbda618ed-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61056674-f458-4118-8944-0d8cbda618ed-part16', 'scsi-SQEMU_QEMU_HARDDISK_61056674-f458-4118-8944-0d8cbda618ed-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179822 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8a84d633--ba5b--5049--b6da--2482ee8b3083-osd--block--8a84d633--ba5b--5049--b6da--2482ee8b3083'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XuzJqd-HbZT-pgPn-w1pW-C0el-OSmi-JJyU7B', 'scsi-0QEMU_QEMU_HARDDISK_34a77e0f-df07-4c87-b046-7d039bca2077', 'scsi-SQEMU_QEMU_HARDDISK_34a77e0f-df07-4c87-b046-7d039bca2077'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179827 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179831 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179835 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179839 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179847 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2184af5-6da0-496d-b48a-b0daa217c842', 'scsi-SQEMU_QEMU_HARDDISK_b2184af5-6da0-496d-b48a-b0daa217c842'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179851 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179856 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179860 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-02-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179864 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179868 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179912 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:56:38.179935 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_143a36a5-297c-4e67-888b-5cce7baa02e1', 'scsi-SQEMU_QEMU_HARDDISK_143a36a5-297c-4e67-888b-5cce7baa02e1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_143a36a5-297c-4e67-888b-5cce7baa02e1-part1', 'scsi-SQEMU_QEMU_HARDDISK_143a36a5-297c-4e67-888b-5cce7baa02e1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_143a36a5-297c-4e67-888b-5cce7baa02e1-part14', 'scsi-SQEMU_QEMU_HARDDISK_143a36a5-297c-4e67-888b-5cce7baa02e1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_143a36a5-297c-4e67-888b-5cce7baa02e1-part15', 'scsi-SQEMU_QEMU_HARDDISK_143a36a5-297c-4e67-888b-5cce7baa02e1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_143a36a5-297c-4e67-888b-5cce7baa02e1-part16', 'scsi-SQEMU_QEMU_HARDDISK_143a36a5-297c-4e67-888b-5cce7baa02e1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179942 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179949 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 
'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-02-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179963 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-02-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179972 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.179979 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.179985 | orchestrator | skipping: 
[testbed-node-2] 2026-03-02 00:56:38.179992 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71a7a740-2bf8-4a84-80fa-758afac521da', 'scsi-SQEMU_QEMU_HARDDISK_71a7a740-2bf8-4a84-80fa-758afac521da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71a7a740-2bf8-4a84-80fa-758afac521da-part1', 'scsi-SQEMU_QEMU_HARDDISK_71a7a740-2bf8-4a84-80fa-758afac521da-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71a7a740-2bf8-4a84-80fa-758afac521da-part14', 'scsi-SQEMU_QEMU_HARDDISK_71a7a740-2bf8-4a84-80fa-758afac521da-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71a7a740-2bf8-4a84-80fa-758afac521da-part15', 'scsi-SQEMU_QEMU_HARDDISK_71a7a740-2bf8-4a84-80fa-758afac521da-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71a7a740-2bf8-4a84-80fa-758afac521da-part16', 'scsi-SQEMU_QEMU_HARDDISK_71a7a740-2bf8-4a84-80fa-758afac521da-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': 
'913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.180003 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-02-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:56:38.180010 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.180016 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.180020 | orchestrator | 2026-03-02 00:56:38.180027 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-02 00:56:38.180034 | orchestrator | Monday 02 March 2026 00:46:39 +0000 (0:00:01.049) 0:00:30.150 ********** 2026-03-02 00:56:38.180042 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.180052 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.180058 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.180064 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.180071 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.180078 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.180089 | 
orchestrator | 2026-03-02 00:56:38.180096 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-02 00:56:38.180100 | orchestrator | Monday 02 March 2026 00:46:40 +0000 (0:00:01.397) 0:00:31.548 ********** 2026-03-02 00:56:38.180104 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.180109 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.180113 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.180118 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.180122 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.180126 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.180130 | orchestrator | 2026-03-02 00:56:38.180135 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-02 00:56:38.180139 | orchestrator | Monday 02 March 2026 00:46:41 +0000 (0:00:00.637) 0:00:32.185 ********** 2026-03-02 00:56:38.180143 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.180148 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.180152 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.180156 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.180160 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.180165 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.180169 | orchestrator | 2026-03-02 00:56:38.180174 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-02 00:56:38.180178 | orchestrator | Monday 02 March 2026 00:46:42 +0000 (0:00:00.977) 0:00:33.163 ********** 2026-03-02 00:56:38.180182 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.180186 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.180190 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.180194 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.180198 | orchestrator | skipping: [testbed-node-1] 
2026-03-02 00:56:38.180202 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.180205 | orchestrator | 2026-03-02 00:56:38.180209 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-02 00:56:38.180213 | orchestrator | Monday 02 March 2026 00:46:42 +0000 (0:00:00.518) 0:00:33.682 ********** 2026-03-02 00:56:38.180217 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.180220 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.180227 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.180231 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.180235 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.180239 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.180242 | orchestrator | 2026-03-02 00:56:38.180246 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-02 00:56:38.180250 | orchestrator | Monday 02 March 2026 00:46:43 +0000 (0:00:00.728) 0:00:34.411 ********** 2026-03-02 00:56:38.180254 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.180257 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.180261 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.180264 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.180268 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.180272 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.180276 | orchestrator | 2026-03-02 00:56:38.180279 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-02 00:56:38.180283 | orchestrator | Monday 02 March 2026 00:46:44 +0000 (0:00:00.809) 0:00:35.220 ********** 2026-03-02 00:56:38.180287 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-02 00:56:38.180291 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-02 00:56:38.180294 | 
orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-02 00:56:38.180345 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-02 00:56:38.180349 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-02 00:56:38.180353 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-02 00:56:38.180357 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-02 00:56:38.180360 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-02 00:56:38.180364 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-02 00:56:38.180368 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-02 00:56:38.180371 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-02 00:56:38.180375 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-02 00:56:38.180379 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-02 00:56:38.180383 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-02 00:56:38.180386 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-02 00:56:38.180390 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-02 00:56:38.180394 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-02 00:56:38.180397 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-02 00:56:38.180401 | orchestrator | 2026-03-02 00:56:38.180405 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-02 00:56:38.180409 | orchestrator | Monday 02 March 2026 00:46:47 +0000 (0:00:02.803) 0:00:38.024 ********** 2026-03-02 00:56:38.180412 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-02 00:56:38.180416 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-02 00:56:38.180420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-02 
00:56:38.180424 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-02 00:56:38.180427 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-02 00:56:38.180431 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-02 00:56:38.180435 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.180439 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-02 00:56:38.180445 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-02 00:56:38.180449 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-02 00:56:38.180453 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.180457 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-02 00:56:38.180460 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-02 00:56:38.180470 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-02 00:56:38.180474 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.180478 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-02 00:56:38.180482 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-02 00:56:38.180486 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-02 00:56:38.180489 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.180493 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.180497 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-02 00:56:38.180500 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-02 00:56:38.180504 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-02 00:56:38.180508 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.180511 | orchestrator | 2026-03-02 00:56:38.180515 | orchestrator | TASK [ceph-facts : Import_tasks 
set_radosgw_address.yml] *********************** 2026-03-02 00:56:38.180519 | orchestrator | Monday 02 March 2026 00:46:48 +0000 (0:00:01.114) 0:00:39.139 ********** 2026-03-02 00:56:38.180523 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.180527 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.180530 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.180534 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:56:38.180538 | orchestrator | 2026-03-02 00:56:38.180542 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-02 00:56:38.180546 | orchestrator | Monday 02 March 2026 00:46:49 +0000 (0:00:01.046) 0:00:40.185 ********** 2026-03-02 00:56:38.180550 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.180554 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.180557 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.180561 | orchestrator | 2026-03-02 00:56:38.180565 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-02 00:56:38.180569 | orchestrator | Monday 02 March 2026 00:46:49 +0000 (0:00:00.301) 0:00:40.487 ********** 2026-03-02 00:56:38.180572 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.180576 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.180580 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.180583 | orchestrator | 2026-03-02 00:56:38.180587 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-02 00:56:38.180591 | orchestrator | Monday 02 March 2026 00:46:50 +0000 (0:00:00.366) 0:00:40.854 ********** 2026-03-02 00:56:38.180595 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.180598 | orchestrator | 
skipping: [testbed-node-4] 2026-03-02 00:56:38.180602 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.180606 | orchestrator | 2026-03-02 00:56:38.180609 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-02 00:56:38.180613 | orchestrator | Monday 02 March 2026 00:46:50 +0000 (0:00:00.860) 0:00:41.715 ********** 2026-03-02 00:56:38.180617 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.180620 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.180624 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.180628 | orchestrator | 2026-03-02 00:56:38.180632 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-02 00:56:38.180635 | orchestrator | Monday 02 March 2026 00:46:51 +0000 (0:00:00.762) 0:00:42.478 ********** 2026-03-02 00:56:38.180639 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-02 00:56:38.180645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-02 00:56:38.180654 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-02 00:56:38.180662 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.180668 | orchestrator | 2026-03-02 00:56:38.180675 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-02 00:56:38.180686 | orchestrator | Monday 02 March 2026 00:46:52 +0000 (0:00:00.627) 0:00:43.105 ********** 2026-03-02 00:56:38.180693 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-02 00:56:38.180700 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-02 00:56:38.180704 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-02 00:56:38.180708 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.180711 | orchestrator | 2026-03-02 00:56:38.180715 | orchestrator | TASK [ceph-facts : Set_fact 
_radosgw_address to radosgw_interface - ipv6] ****** 2026-03-02 00:56:38.180719 | orchestrator | Monday 02 March 2026 00:46:52 +0000 (0:00:00.650) 0:00:43.756 ********** 2026-03-02 00:56:38.180723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-02 00:56:38.180726 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-02 00:56:38.180730 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-02 00:56:38.180734 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.180737 | orchestrator | 2026-03-02 00:56:38.180741 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-02 00:56:38.180745 | orchestrator | Monday 02 March 2026 00:46:53 +0000 (0:00:00.413) 0:00:44.170 ********** 2026-03-02 00:56:38.180749 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.180812 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.180820 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.180824 | orchestrator | 2026-03-02 00:56:38.180828 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-02 00:56:38.180831 | orchestrator | Monday 02 March 2026 00:46:53 +0000 (0:00:00.402) 0:00:44.572 ********** 2026-03-02 00:56:38.180835 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-02 00:56:38.180839 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-02 00:56:38.181022 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-02 00:56:38.181038 | orchestrator | 2026-03-02 00:56:38.181189 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-02 00:56:38.181194 | orchestrator | Monday 02 March 2026 00:46:54 +0000 (0:00:00.687) 0:00:45.260 ********** 2026-03-02 00:56:38.181198 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-02 00:56:38.181205 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-02 00:56:38.181209 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-02 00:56:38.181232 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-02 00:56:38.181237 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-02 00:56:38.181241 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-02 00:56:38.181245 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-02 00:56:38.181249 | orchestrator | 2026-03-02 00:56:38.181253 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-02 00:56:38.181256 | orchestrator | Monday 02 March 2026 00:46:55 +0000 (0:00:00.881) 0:00:46.141 ********** 2026-03-02 00:56:38.181260 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-02 00:56:38.181264 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-02 00:56:38.181268 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-02 00:56:38.181271 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-02 00:56:38.181275 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-02 00:56:38.181279 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-02 00:56:38.181283 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-02 00:56:38.181292 | orchestrator | 2026-03-02 00:56:38.181295 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-02 00:56:38.181299 | orchestrator | Monday 02 
March 2026 00:46:57 +0000 (0:00:02.467) 0:00:48.609 ********** 2026-03-02 00:56:38.181303 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:56:38.181307 | orchestrator | 2026-03-02 00:56:38.181311 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-02 00:56:38.181315 | orchestrator | Monday 02 March 2026 00:46:59 +0000 (0:00:01.292) 0:00:49.901 ********** 2026-03-02 00:56:38.181319 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:56:38.181323 | orchestrator | 2026-03-02 00:56:38.181326 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-02 00:56:38.181330 | orchestrator | Monday 02 March 2026 00:47:00 +0000 (0:00:01.364) 0:00:51.266 ********** 2026-03-02 00:56:38.181334 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.181338 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.181341 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.181345 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.181354 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.181362 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.181366 | orchestrator | 2026-03-02 00:56:38.181370 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-02 00:56:38.181374 | orchestrator | Monday 02 March 2026 00:47:01 +0000 (0:00:01.316) 0:00:52.582 ********** 2026-03-02 00:56:38.181377 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.181381 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.181385 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.181389 | 
orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.181392 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.181396 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.181400 | orchestrator |
2026-03-02 00:56:38.181404 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-02 00:56:38.181407 | orchestrator | Monday 02 March 2026 00:47:02 +0000 (0:00:00.951) 0:00:53.534 **********
2026-03-02 00:56:38.181411 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.181415 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.181419 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.181422 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.181426 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.181430 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.181434 | orchestrator |
2026-03-02 00:56:38.181437 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-02 00:56:38.181441 | orchestrator | Monday 02 March 2026 00:47:03 +0000 (0:00:00.892) 0:00:54.426 **********
2026-03-02 00:56:38.181445 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.181449 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.181452 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.181456 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.181460 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.181464 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.181467 | orchestrator |
2026-03-02 00:56:38.181471 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-02 00:56:38.181475 | orchestrator | Monday 02 March 2026 00:47:04 +0000 (0:00:00.796) 0:00:55.223 **********
2026-03-02 00:56:38.181479 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.181482 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.181486 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.181490 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.181494 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.181508 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.181512 | orchestrator |
2026-03-02 00:56:38.181519 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-02 00:56:38.181523 | orchestrator | Monday 02 March 2026 00:47:05 +0000 (0:00:01.076) 0:00:56.300 **********
2026-03-02 00:56:38.181526 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.181530 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.181536 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.181540 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.181544 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.181547 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.181551 | orchestrator |
2026-03-02 00:56:38.181555 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-02 00:56:38.181559 | orchestrator | Monday 02 March 2026 00:47:06 +0000 (0:00:00.680) 0:00:56.980 **********
2026-03-02 00:56:38.181562 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.181566 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.181570 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.181573 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.181577 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.181581 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.181585 | orchestrator |
2026-03-02 00:56:38.181588 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-02 00:56:38.181592 | orchestrator | Monday 02 March 2026 00:47:07 +0000 (0:00:01.326) 0:00:58.307 **********
2026-03-02 00:56:38.181596 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.181600 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.181603 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.181607 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.181611 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.181614 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.181619 | orchestrator |
2026-03-02 00:56:38.181625 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-02 00:56:38.181631 | orchestrator | Monday 02 March 2026 00:47:08 +0000 (0:00:00.989) 0:00:59.296 **********
2026-03-02 00:56:38.181641 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.181648 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.181653 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.181659 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.181665 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.181671 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.181676 | orchestrator |
2026-03-02 00:56:38.181682 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-02 00:56:38.181688 | orchestrator | Monday 02 March 2026 00:47:09 +0000 (0:00:01.374) 0:01:00.671 **********
2026-03-02 00:56:38.181694 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.181700 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.181706 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.181712 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.181935 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.181944 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.181947 | orchestrator |
2026-03-02 00:56:38.181951 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-02 00:56:38.181955 | orchestrator | Monday 02 March 2026 00:47:11 +0000 (0:00:01.496) 0:01:02.168 **********
2026-03-02 00:56:38.181959 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.181963 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.181967 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.181970 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.181974 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.181978 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.181982 | orchestrator |
2026-03-02 00:56:38.181986 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-02 00:56:38.181989 | orchestrator | Monday 02 March 2026 00:47:12 +0000 (0:00:01.396) 0:01:03.565 **********
2026-03-02 00:56:38.181993 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.182004 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.182008 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.182035 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.182040 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.182044 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.182048 | orchestrator |
2026-03-02 00:56:38.182052 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-02 00:56:38.182056 | orchestrator | Monday 02 March 2026 00:47:13 +0000 (0:00:00.654) 0:01:04.220 **********
2026-03-02 00:56:38.182059 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.182063 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.182067 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.182071 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.182074 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.182078 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.182082 | orchestrator |
2026-03-02 00:56:38.182086 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-02 00:56:38.182089 | orchestrator | Monday 02 March 2026 00:47:14 +0000 (0:00:00.858) 0:01:05.078 **********
2026-03-02 00:56:38.182093 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.182097 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.182101 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.182104 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.182108 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.182112 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.182115 | orchestrator |
2026-03-02 00:56:38.182119 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-02 00:56:38.182123 | orchestrator | Monday 02 March 2026 00:47:14 +0000 (0:00:00.601) 0:01:05.680 **********
2026-03-02 00:56:38.182127 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.182131 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.182134 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.182138 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.182142 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.182145 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.182149 | orchestrator |
2026-03-02 00:56:38.182153 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-02 00:56:38.182157 | orchestrator | Monday 02 March 2026 00:47:15 +0000 (0:00:00.804) 0:01:06.485 **********
2026-03-02 00:56:38.182160 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.182164 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.182168 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.182171 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.182216 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.182222 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.182226 | orchestrator |
2026-03-02 00:56:38.182230 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-02 00:56:38.182234 | orchestrator | Monday 02 March 2026 00:47:16 +0000 (0:00:00.718) 0:01:07.203 **********
2026-03-02 00:56:38.182238 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.182245 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.182249 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.182253 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.182257 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.182260 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.182264 | orchestrator |
2026-03-02 00:56:38.182268 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-02 00:56:38.182272 | orchestrator | Monday 02 March 2026 00:47:17 +0000 (0:00:00.701) 0:01:07.905 **********
2026-03-02 00:56:38.182275 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.182279 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.182283 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.182286 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.182290 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.182297 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.182301 | orchestrator |
2026-03-02 00:56:38.182305 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-02 00:56:38.182308 | orchestrator | Monday 02 March 2026 00:47:17 +0000 (0:00:00.628) 0:01:08.533 **********
2026-03-02 00:56:38.182312 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.182316 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.182319 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.182323 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.182327 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.182330 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.182334 | orchestrator |
2026-03-02 00:56:38.182338 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-02 00:56:38.182342 | orchestrator | Monday 02 March 2026 00:47:19 +0000 (0:00:01.327) 0:01:09.861 **********
2026-03-02 00:56:38.182345 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:56:38.182349 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:56:38.182353 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:56:38.182356 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:56:38.182360 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:56:38.182364 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:56:38.182368 | orchestrator |
2026-03-02 00:56:38.182371 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-02 00:56:38.182375 | orchestrator | Monday 02 March 2026 00:47:21 +0000 (0:00:02.143) 0:01:12.004 **********
2026-03-02 00:56:38.182379 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:56:38.182383 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:56:38.182386 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:56:38.182390 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:56:38.182394 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:56:38.182397 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:56:38.182401 | orchestrator |
2026-03-02 00:56:38.182405 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-02 00:56:38.182408 | orchestrator | Monday 02 March 2026 00:47:23 +0000 (0:00:02.534) 0:01:14.538 **********
2026-03-02 00:56:38.182412 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:56:38.182416 | orchestrator |
2026-03-02 00:56:38.182420 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-02 00:56:38.182424 | orchestrator | Monday 02 March 2026 00:47:25 +0000 (0:00:01.272) 0:01:15.810 **********
2026-03-02 00:56:38.182428 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.182431 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.182435 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.182439 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.182442 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.182446 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.182450 | orchestrator |
2026-03-02 00:56:38.182454 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-02 00:56:38.182457 | orchestrator | Monday 02 March 2026 00:47:25 +0000 (0:00:00.687) 0:01:16.498 **********
2026-03-02 00:56:38.182461 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.182465 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.182468 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.182472 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.182476 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.182479 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.182483 | orchestrator |
2026-03-02 00:56:38.182487 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-02 00:56:38.182491 | orchestrator | Monday 02 March 2026 00:47:26 +0000 (0:00:00.723) 0:01:17.222 **********
2026-03-02 00:56:38.182494 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-02 00:56:38.182501 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-02 00:56:38.182504 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-02 00:56:38.182508 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-02 00:56:38.182512 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-02 00:56:38.182516 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-02 00:56:38.182523 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-02 00:56:38.182531 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-02 00:56:38.182541 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-02 00:56:38.182547 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-02 00:56:38.182571 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-02 00:56:38.182577 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-02 00:56:38.182583 | orchestrator |
2026-03-02 00:56:38.182588 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-02 00:56:38.182597 | orchestrator | Monday 02 March 2026 00:47:27 +0000 (0:00:01.181) 0:01:18.403 **********
2026-03-02 00:56:38.182602 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:56:38.182608 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:56:38.182614 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:56:38.182620 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:56:38.182625 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:56:38.182631 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:56:38.182637 | orchestrator |
2026-03-02 00:56:38.182644 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-02 00:56:38.182649 | orchestrator | Monday 02 March 2026 00:47:28 +0000 (0:00:01.169) 0:01:19.573 **********
2026-03-02 00:56:38.182653 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.182657 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.182660 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.182664 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.182668 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.182671 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.182675 | orchestrator |
2026-03-02 00:56:38.182679 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-02 00:56:38.182683 | orchestrator | Monday 02 March 2026 00:47:29 +0000 (0:00:00.609) 0:01:20.182 **********
2026-03-02 00:56:38.182686 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.182690 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.182694 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.182697 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.182701 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.182705 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.182708 | orchestrator |
2026-03-02 00:56:38.182712 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-02 00:56:38.182716 | orchestrator | Monday 02 March 2026 00:47:30 +0000 (0:00:00.810) 0:01:20.993 **********
2026-03-02 00:56:38.182720 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.182723 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.182727 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.182731 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.182735 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.182738 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.182742 | orchestrator |
2026-03-02 00:56:38.182746 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-02 00:56:38.182750 | orchestrator | Monday 02 March 2026 00:47:30 +0000 (0:00:00.508) 0:01:21.501 **********
2026-03-02 00:56:38.182767 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:56:38.182771 | orchestrator |
2026-03-02 00:56:38.182775 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-02 00:56:38.182779 | orchestrator | Monday 02 March 2026 00:47:31 +0000 (0:00:00.991) 0:01:22.493 **********
2026-03-02 00:56:38.182782 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.182786 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.182790 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.182794 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.182797 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.182801 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.182805 | orchestrator |
2026-03-02 00:56:38.182808 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-02 00:56:38.182812 | orchestrator | Monday 02 March 2026 00:48:12 +0000 (0:00:40.818) 0:02:03.311 **********
2026-03-02 00:56:38.182816 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-02 00:56:38.182819 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-02 00:56:38.182823 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-02 00:56:38.182827 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.182831 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-02 00:56:38.182834 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-02 00:56:38.182838 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-02 00:56:38.182842 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.182845 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-02 00:56:38.182849 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-02 00:56:38.182853 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-02 00:56:38.182857 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.182860 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-02 00:56:38.182864 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-02 00:56:38.182868 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-02 00:56:38.182872 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.182875 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-02 00:56:38.182879 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-02 00:56:38.182883 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-02 00:56:38.182887 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.182904 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-02 00:56:38.182909 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-02 00:56:38.182913 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-02 00:56:38.182916 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.182920 | orchestrator |
2026-03-02 00:56:38.182926 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-02 00:56:38.182930 | orchestrator | Monday 02 March 2026 00:48:13 +0000 (0:00:00.531) 0:02:03.843 **********
2026-03-02 00:56:38.182934 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.182938 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.182941 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.182945 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.182949 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.182955 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.182959 | orchestrator |
2026-03-02 00:56:38.182963 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-02 00:56:38.182967 | orchestrator | Monday 02 March 2026 00:48:13 +0000 (0:00:00.627) 0:02:04.471 **********
2026-03-02 00:56:38.182970 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.182974 | orchestrator |
2026-03-02 00:56:38.182978 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-02 00:56:38.182981 | orchestrator | Monday 02 March 2026 00:48:13 +0000 (0:00:00.134) 0:02:04.605 **********
2026-03-02 00:56:38.182985 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.182989 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.182993 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.182997 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.183000 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.183004 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.183008 | orchestrator |
2026-03-02 00:56:38.183012 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-02 00:56:38.183015 | orchestrator | Monday 02 March 2026 00:48:14 +0000 (0:00:00.504) 0:02:05.110 **********
2026-03-02 00:56:38.183019 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.183023 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.183027 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.183030 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.183034 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.183038 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.183041 | orchestrator |
2026-03-02 00:56:38.183045 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-02 00:56:38.183049 | orchestrator | Monday 02 March 2026 00:48:15 +0000 (0:00:00.693) 0:02:05.803 **********
2026-03-02 00:56:38.183053 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.183056 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.183060 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.183064 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.183067 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.183071 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.183075 | orchestrator |
2026-03-02 00:56:38.183078 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-02 00:56:38.183082 | orchestrator | Monday 02 March 2026 00:48:15 +0000 (0:00:00.525) 0:02:06.328 **********
2026-03-02 00:56:38.183086 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.183090 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.183093 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.183097 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.183101 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.183105 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.183108 | orchestrator |
2026-03-02 00:56:38.183112 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-02 00:56:38.183116 | orchestrator | Monday 02 March 2026 00:48:17 +0000 (0:00:02.296) 0:02:08.625 **********
2026-03-02 00:56:38.183120 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.183123 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.183127 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.183131 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.183134 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.183138 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.183142 | orchestrator |
2026-03-02 00:56:38.183146 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-02 00:56:38.183149 | orchestrator | Monday 02 March 2026 00:48:18 +0000 (0:00:00.517) 0:02:09.142 **********
2026-03-02 00:56:38.183154 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:56:38.183158 | orchestrator |
2026-03-02 00:56:38.183164 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-02 00:56:38.183168 | orchestrator | Monday 02 March 2026 00:48:19 +0000 (0:00:01.067) 0:02:10.210 **********
2026-03-02 00:56:38.183171 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.183175 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.183179 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.183183 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.183186 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.183190 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.183194 | orchestrator |
2026-03-02 00:56:38.183197 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-02 00:56:38.183201 | orchestrator | Monday 02 March 2026 00:48:20 +0000 (0:00:00.685) 0:02:10.895 **********
2026-03-02 00:56:38.183205 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.183209 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.183212 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.183216 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.183220 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.183224 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.183227 | orchestrator |
2026-03-02 00:56:38.183231 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-02 00:56:38.183235 | orchestrator | Monday 02 March 2026 00:48:20 +0000 (0:00:00.611) 0:02:11.507 **********
2026-03-02 00:56:38.183239 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.183242 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.183257 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.183262 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.183266 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.183269 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.183273 | orchestrator |
2026-03-02 00:56:38.183277 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-02 00:56:38.183281 | orchestrator | Monday 02 March 2026 00:48:21 +0000 (0:00:00.690) 0:02:12.198 **********
2026-03-02 00:56:38.183287 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.183290 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.183294 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.183298 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.183301 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.183305 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.183309 | orchestrator |
2026-03-02 00:56:38.183313 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-02 00:56:38.183316 | orchestrator | Monday 02 March 2026 00:48:22 +0000 (0:00:00.618) 0:02:12.816 **********
2026-03-02 00:56:38.183320 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.183324 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.183327 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.183331 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.183335 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.183338 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.183342 | orchestrator |
2026-03-02 00:56:38.183346 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-02 00:56:38.183350 | orchestrator | Monday 02 March 2026 00:48:22 +0000 (0:00:00.653) 0:02:13.469 **********
2026-03-02 00:56:38.183353 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.183357 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.183361 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.183364 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.183368 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.183372 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.183376 | orchestrator |
2026-03-02 00:56:38.183379 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-02 00:56:38.183383 | orchestrator | Monday 02 March 2026 00:48:23 +0000 (0:00:00.520) 0:02:13.990 **********
2026-03-02 00:56:38.183391 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.183395 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.183398 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.183402 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.183406 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.183409 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.183413 | orchestrator |
2026-03-02 00:56:38.183417 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-02 00:56:38.183420 | orchestrator | Monday 02 March 2026 00:48:23 +0000 (0:00:00.750) 0:02:14.740 **********
2026-03-02 00:56:38.183424 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.183428 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.183432 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.183435 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.183439 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.183443 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.183446 | orchestrator |
2026-03-02 00:56:38.183450 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-02 00:56:38.183454 | orchestrator | Monday 02 March 2026 00:48:24 +0000 (0:00:00.546) 0:02:15.287 **********
2026-03-02 00:56:38.183458 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.183461 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.183465 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.183469 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.183472 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.183476 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.183480 | orchestrator |
2026-03-02 00:56:38.183484 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-02 00:56:38.183487 | orchestrator | Monday 02 March 2026 00:48:25 +0000 (0:00:01.024) 0:02:16.311 **********
2026-03-02 00:56:38.183491 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:56:38.183495 | orchestrator |
2026-03-02 00:56:38.183499 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-02 00:56:38.183502 | orchestrator | Monday 02 March 2026 00:48:26 +0000 (0:00:01.070) 0:02:17.382 **********
2026-03-02 00:56:38.183506 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-03-02 00:56:38.183510 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-03-02 00:56:38.183514 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-03-02 00:56:38.183518 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-03-02 00:56:38.183521 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-03-02 00:56:38.183525 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-02 00:56:38.183529 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-03-02 00:56:38.183533 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-02 00:56:38.183536 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-02 00:56:38.183540 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-02 00:56:38.183544 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-02 00:56:38.183547 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-02 00:56:38.183551 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-02 00:56:38.183555 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-02 00:56:38.183559 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-02 00:56:38.183563 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-02 00:56:38.183566 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-02 00:56:38.183570 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-02 00:56:38.183585 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-02 00:56:38.183592 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-02 00:56:38.183596 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-02 00:56:38.183600 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-02 00:56:38.183603 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-02 00:56:38.183609 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-02 00:56:38.183613 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-02 00:56:38.183617 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-02 00:56:38.183620 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-02 00:56:38.183624 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-02 00:56:38.183628 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-02 00:56:38.183631 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-02 00:56:38.183635 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-02 00:56:38.183639 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-02 00:56:38.183642 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-02 00:56:38.183646 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-02 00:56:38.183650 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-02 00:56:38.183653 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-02 00:56:38.183657 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-02 00:56:38.183661 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-02 00:56:38.183665 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-02 00:56:38.183668 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-02 00:56:38.183672 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-02 00:56:38.183676 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-02 00:56:38.183679 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-02 00:56:38.183683 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-02 00:56:38.183687 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-02 00:56:38.183691 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-02 00:56:38.183694 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-02 00:56:38.183698 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-02 00:56:38.183702 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-02 00:56:38.183706 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-02 00:56:38.183709 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-02 00:56:38.183713 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-02 00:56:38.183717 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-02 00:56:38.183720 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-02 00:56:38.183724 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-02 00:56:38.183728 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-02 00:56:38.183731 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-02 00:56:38.183735 | orchestrator | changed:
[testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-02 00:56:38.183739 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-02 00:56:38.183743 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-02 00:56:38.183747 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-02 00:56:38.183767 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-02 00:56:38.183771 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-02 00:56:38.183775 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-02 00:56:38.183779 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-02 00:56:38.183782 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-02 00:56:38.183786 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-02 00:56:38.183790 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-02 00:56:38.183794 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-02 00:56:38.183797 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-02 00:56:38.183801 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-02 00:56:38.183805 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-02 00:56:38.183808 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-02 00:56:38.183812 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-02 00:56:38.183816 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-02 00:56:38.183819 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 
2026-03-02 00:56:38.183836 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-02 00:56:38.183841 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-02 00:56:38.183845 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-02 00:56:38.183848 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-02 00:56:38.183855 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-02 00:56:38.183859 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-02 00:56:38.183863 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-02 00:56:38.183866 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-02 00:56:38.183870 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-02 00:56:38.183874 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-02 00:56:38.183878 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-02 00:56:38.183882 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-02 00:56:38.183885 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-02 00:56:38.183889 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-02 00:56:38.183893 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-02 00:56:38.183897 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-02 00:56:38.183900 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-02 00:56:38.183904 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-02 00:56:38.183908 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-02 00:56:38.183912 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-02 00:56:38.183915 | orchestrator |
2026-03-02 00:56:38.183919 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-02 00:56:38.183923 | orchestrator | Monday 02 March 2026 00:48:33 +0000 (0:00:06.550) 0:02:23.932 **********
2026-03-02 00:56:38.183927 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.183931 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.183934 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.183939 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-02 00:56:38.183947 | orchestrator |
2026-03-02 00:56:38.183951 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-02 00:56:38.183955 | orchestrator | Monday 02 March 2026 00:48:34 +0000 (0:00:00.931) 0:02:24.864 **********
2026-03-02 00:56:38.183959 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-02 00:56:38.183962 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-02 00:56:38.183966 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-02 00:56:38.183970 | orchestrator |
2026-03-02 00:56:38.183974 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-02 00:56:38.183978 | orchestrator | Monday 02 March 2026 00:48:35 +0000 (0:00:00.957) 0:02:25.822 **********
2026-03-02 00:56:38.183981 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-02 00:56:38.183985 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-02 00:56:38.183989 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-02 00:56:38.183993 | orchestrator |
2026-03-02 00:56:38.183996 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-02 00:56:38.184000 | orchestrator | Monday 02 March 2026 00:48:36 +0000 (0:00:01.135) 0:02:26.957 **********
2026-03-02 00:56:38.184004 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.184008 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.184012 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.184015 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184019 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184023 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184026 | orchestrator |
2026-03-02 00:56:38.184030 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-02 00:56:38.184034 | orchestrator | Monday 02 March 2026 00:48:36 +0000 (0:00:00.669) 0:02:27.627 **********
2026-03-02 00:56:38.184038 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.184042 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.184045 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.184049 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184053 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184057 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184061 | orchestrator |
2026-03-02 00:56:38.184064 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-02 00:56:38.184068 | orchestrator | Monday 02 March 2026 00:48:37 +0000 (0:00:00.745) 0:02:28.373 **********
2026-03-02 00:56:38.184072 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.184076 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.184080 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.184083 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184087 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184091 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184095 | orchestrator |
2026-03-02 00:56:38.184111 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-02 00:56:38.184116 | orchestrator | Monday 02 March 2026 00:48:38 +0000 (0:00:00.587) 0:02:28.960 **********
2026-03-02 00:56:38.184120 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.184123 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.184127 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.184131 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184134 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184144 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184148 | orchestrator |
2026-03-02 00:56:38.184152 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-02 00:56:38.184156 | orchestrator | Monday 02 March 2026 00:48:38 +0000 (0:00:00.643) 0:02:29.603 **********
2026-03-02 00:56:38.184160 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.184164 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.184167 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.184171 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184175 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184179 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184182 | orchestrator |
2026-03-02 00:56:38.184187 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-02 00:56:38.184190 | orchestrator | Monday 02 March 2026 00:48:39 +0000 (0:00:00.706) 0:02:30.110 **********
2026-03-02 00:56:38.184194 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.184204 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.184208 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.184212 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184219 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184223 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184227 | orchestrator |
2026-03-02 00:56:38.184231 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-02 00:56:38.184235 | orchestrator | Monday 02 March 2026 00:48:40 +0000 (0:00:00.706) 0:02:30.816 **********
2026-03-02 00:56:38.184238 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.184242 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.184246 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.184250 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184253 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184257 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184261 | orchestrator |
2026-03-02 00:56:38.184265 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-02 00:56:38.184269 | orchestrator | Monday 02 March 2026 00:48:40 +0000 (0:00:00.532) 0:02:31.348 **********
2026-03-02 00:56:38.184273 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.184276 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.184280 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.184284 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184288 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184291 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184295 | orchestrator |
2026-03-02 00:56:38.184299 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-02 00:56:38.184303 | orchestrator | Monday 02 March 2026 00:48:41 +0000 (0:00:00.692) 0:02:32.040 **********
2026-03-02 00:56:38.184307 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184310 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184314 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184318 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.184321 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.184325 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.184329 | orchestrator |
2026-03-02 00:56:38.184333 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-02 00:56:38.184336 | orchestrator | Monday 02 March 2026 00:48:44 +0000 (0:00:03.055) 0:02:35.096 **********
2026-03-02 00:56:38.184340 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.184344 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.184348 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.184352 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184355 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184359 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184363 | orchestrator |
2026-03-02 00:56:38.184369 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-02 00:56:38.184373 | orchestrator | Monday 02 March 2026 00:48:45 +0000 (0:00:00.743) 0:02:35.840 **********
2026-03-02 00:56:38.184377 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.184381 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.184385 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.184388 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184392 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184396 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184399 | orchestrator |
2026-03-02 00:56:38.184403 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-02 00:56:38.184407 | orchestrator | Monday 02 March 2026 00:48:45 +0000 (0:00:00.589) 0:02:36.429 **********
2026-03-02 00:56:38.184411 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.184414 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.184418 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.184422 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184426 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184429 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184433 | orchestrator |
2026-03-02 00:56:38.184437 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-02 00:56:38.184441 | orchestrator | Monday 02 March 2026 00:48:46 +0000 (0:00:00.780) 0:02:37.210 **********
2026-03-02 00:56:38.184445 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-02 00:56:38.184448 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-02 00:56:38.184452 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-02 00:56:38.184456 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184475 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184479 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184483 | orchestrator |
2026-03-02 00:56:38.184487 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-02 00:56:38.184490 | orchestrator | Monday 02 March 2026 00:48:46 +0000 (0:00:00.495) 0:02:37.705 **********
2026-03-02 00:56:38.184498 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-02 00:56:38.184503 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-02 00:56:38.184508 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.184512 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-02 00:56:38.184516 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-02 00:56:38.184520 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.184524 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-02 00:56:38.184530 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-02 00:56:38.184534 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184538 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.184542 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184546 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184549 | orchestrator |
2026-03-02 00:56:38.184553 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-02 00:56:38.184557 | orchestrator | Monday 02 March 2026 00:48:47 +0000 (0:00:00.821) 0:02:38.526 **********
2026-03-02 00:56:38.184561 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.184564 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.184568 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.184572 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184576 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184579 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184583 | orchestrator |
2026-03-02 00:56:38.184587 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-02 00:56:38.184591 | orchestrator | Monday 02 March 2026 00:48:48 +0000 (0:00:00.503) 0:02:39.030 **********
2026-03-02 00:56:38.184594 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.184598 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.184602 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.184606 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184609 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184613 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184617 | orchestrator |
2026-03-02 00:56:38.184621 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-02 00:56:38.184625 | orchestrator | Monday 02 March 2026 00:48:48 +0000 (0:00:00.705) 0:02:39.735 **********
2026-03-02 00:56:38.184629 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.184632 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.184636 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.184640 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184643 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184647 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184651 | orchestrator |
2026-03-02 00:56:38.184655 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-02 00:56:38.184658 | orchestrator | Monday 02 March 2026 00:48:49 +0000 (0:00:00.540) 0:02:40.276 **********
2026-03-02 00:56:38.184662 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.184666 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.184670 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.184673 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184677 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184681 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184684 | orchestrator |
2026-03-02 00:56:38.184688 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-02 00:56:38.184705 | orchestrator | Monday 02 March 2026 00:48:50 +0000 (0:00:00.774) 0:02:41.051 **********
2026-03-02 00:56:38.184710 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.184714 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.184717 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.184721 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184725 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184729 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184735 | orchestrator |
2026-03-02 00:56:38.184742 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-02 00:56:38.184746 | orchestrator | Monday 02 March 2026 00:48:50 +0000 (0:00:00.536) 0:02:41.588 **********
2026-03-02 00:56:38.184749 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.184777 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.184781 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.184785 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184788 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184792 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184796 | orchestrator |
2026-03-02 00:56:38.184800 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-02 00:56:38.184803 | orchestrator | Monday 02 March 2026 00:48:51 +0000 (0:00:00.746) 0:02:42.334 **********
2026-03-02 00:56:38.184807 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-02 00:56:38.184811 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-02 00:56:38.184815 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-02 00:56:38.184819 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.184822 | orchestrator |
2026-03-02 00:56:38.184826 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-02 00:56:38.184830 | orchestrator | Monday 02 March 2026 00:48:51 +0000 (0:00:00.379) 0:02:42.714 **********
2026-03-02 00:56:38.184834 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-02 00:56:38.184837 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-02 00:56:38.184841 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-02 00:56:38.184845 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.184849 | orchestrator |
2026-03-02 00:56:38.184853 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-02 00:56:38.184856 | orchestrator | Monday 02 March 2026 00:48:52 +0000 (0:00:00.385) 0:02:43.099 **********
2026-03-02 00:56:38.184860 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-02 00:56:38.184864 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-02 00:56:38.184868 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-02 00:56:38.184871 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.184875 | orchestrator |
2026-03-02 00:56:38.184879 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-02 00:56:38.184883 | orchestrator | Monday 02 March 2026 00:48:52 +0000 (0:00:00.355) 0:02:43.455 **********
2026-03-02 00:56:38.184886 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:56:38.184890 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:56:38.184894 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:56:38.184898 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184901 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184905 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184909 | orchestrator |
2026-03-02 00:56:38.184913 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-02 00:56:38.184917 | orchestrator | Monday 02 March 2026 00:48:53 +0000 (0:00:00.688) 0:02:44.143 **********
2026-03-02 00:56:38.184920 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-02 00:56:38.184924 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-02 00:56:38.184928 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-02 00:56:38.184932 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-02 00:56:38.184935 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.184939 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-02 00:56:38.184943 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.184947 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-02 00:56:38.184950 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.184954 | orchestrator |
2026-03-02 00:56:38.184958 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-02 00:56:38.184965 | orchestrator | Monday 02 March 2026 00:48:55 +0000 (0:00:02.459) 0:02:46.603 **********
2026-03-02 00:56:38.184968 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:56:38.184972 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:56:38.184976 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:56:38.184980 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:56:38.184984 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:56:38.184987 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:56:38.184991 | orchestrator |
2026-03-02 00:56:38.184995 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-02 00:56:38.184999 | orchestrator | Monday 02 March 2026 00:48:58 +0000 (0:00:03.037) 0:02:49.640 **********
2026-03-02 00:56:38.185002 | orchestrator | changed: [testbed-node-4]
2026-03-02 00:56:38.185006 | orchestrator | changed: [testbed-node-3]
2026-03-02 00:56:38.185010 | orchestrator | changed: [testbed-node-5]
2026-03-02 00:56:38.185014 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:56:38.185017 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:56:38.185021 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:56:38.185025 | orchestrator |
2026-03-02 00:56:38.185029 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-02 00:56:38.185032 | orchestrator | Monday 02 March 2026 00:49:00 +0000 (0:00:01.456) 0:02:51.097 **********
2026-03-02 00:56:38.185036 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.185040 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.185044 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.185048 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:56:38.185052 | orchestrator |
2026-03-02 00:56:38.185056 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-02 00:56:38.185074 | orchestrator | Monday 02 March 2026 00:49:01 +0000 (0:00:01.052) 0:02:52.149 **********
2026-03-02 00:56:38.185079 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.185083 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.185086 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.185090 | orchestrator |
2026-03-02 00:56:38.185097 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-02 00:56:38.185103 | orchestrator | Monday 02 March 2026 00:49:01 +0000 (0:00:00.301) 0:02:52.450 **********
2026-03-02 00:56:38.185113 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:56:38.185122 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:56:38.185130 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:56:38.185136 | orchestrator |
2026-03-02 00:56:38.185142 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-02 00:56:38.185149 | orchestrator | Monday 02 March 2026 00:49:03 +0000 (0:00:01.626) 0:02:54.077 **********
2026-03-02 00:56:38.185155 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-02 00:56:38.185162 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-02 00:56:38.185168 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-02 00:56:38.185175 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.185181 | orchestrator |
2026-03-02 00:56:38.185187 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-02 00:56:38.185194 | orchestrator | Monday 02 March 2026 00:49:03 +0000 (0:00:00.554) 0:02:54.631 **********
2026-03-02 00:56:38.185200 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.185207 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.185212 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.185216 | orchestrator |
2026-03-02 00:56:38.185220 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-02 00:56:38.185223 | orchestrator | Monday 02 March 2026 00:49:04 +0000 (0:00:00.512) 0:02:55.143 **********
2026-03-02 00:56:38.185228 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.185231 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.185239 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.185243 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-02 00:56:38.185246 | orchestrator |
2026-03-02 00:56:38.185250 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-02 00:56:38.185254 | orchestrator | Monday 02 March 2026 00:49:05 +0000 (0:00:00.926) 0:02:56.070 **********
2026-03-02 00:56:38.185258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-02 00:56:38.185261 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-02 00:56:38.185265 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-02 00:56:38.185269 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.185272 | orchestrator |
2026-03-02 00:56:38.185276 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-02 00:56:38.185280 | orchestrator | Monday 02 March 2026 00:49:05 +0000 (0:00:00.353) 0:02:56.423 **********
2026-03-02 00:56:38.185284 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.185287 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.185291 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.185294 | orchestrator |
2026-03-02 00:56:38.185298 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-02 00:56:38.185302 | orchestrator | Monday 02 March 2026 00:49:05 +0000 (0:00:00.296) 0:02:56.720 **********
2026-03-02 00:56:38.185306 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.185309 | orchestrator |
2026-03-02 00:56:38.185313 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-02 00:56:38.185317 | orchestrator | Monday 02 March 2026 00:49:06 +0000 (0:00:00.206) 0:02:56.927 **********
2026-03-02 00:56:38.185321 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.185325 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:56:38.185328 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:56:38.185332 | orchestrator |
2026-03-02 00:56:38.185335 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-02 00:56:38.185339 | orchestrator | Monday 02 March 2026 00:49:06 +0000 (0:00:00.270) 0:02:57.198 **********
2026-03-02 00:56:38.185343 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:56:38.185347 | orchestrator |
2026-03-02 00:56:38.185350 | orchestrator | RUNNING
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-02 00:56:38.185354 | orchestrator | Monday 02 March 2026 00:49:06 +0000 (0:00:00.254) 0:02:57.452 ********** 2026-03-02 00:56:38.185358 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.185362 | orchestrator | 2026-03-02 00:56:38.185365 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-02 00:56:38.185369 | orchestrator | Monday 02 March 2026 00:49:06 +0000 (0:00:00.244) 0:02:57.697 ********** 2026-03-02 00:56:38.185373 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.185376 | orchestrator | 2026-03-02 00:56:38.185380 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-02 00:56:38.185384 | orchestrator | Monday 02 March 2026 00:49:07 +0000 (0:00:00.137) 0:02:57.835 ********** 2026-03-02 00:56:38.185388 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.185391 | orchestrator | 2026-03-02 00:56:38.185395 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-02 00:56:38.185399 | orchestrator | Monday 02 March 2026 00:49:07 +0000 (0:00:00.698) 0:02:58.534 ********** 2026-03-02 00:56:38.185402 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.185406 | orchestrator | 2026-03-02 00:56:38.185410 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-02 00:56:38.185413 | orchestrator | Monday 02 March 2026 00:49:08 +0000 (0:00:00.284) 0:02:58.818 ********** 2026-03-02 00:56:38.185417 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-02 00:56:38.185421 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-02 00:56:38.185425 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-02 00:56:38.185431 | orchestrator | skipping: [testbed-node-3] 2026-03-02 
00:56:38.185435 | orchestrator | 2026-03-02 00:56:38.185438 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-02 00:56:38.185459 | orchestrator | Monday 02 March 2026 00:49:08 +0000 (0:00:00.525) 0:02:59.343 ********** 2026-03-02 00:56:38.185463 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.185467 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.185471 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.185474 | orchestrator | 2026-03-02 00:56:38.185478 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-02 00:56:38.185485 | orchestrator | Monday 02 March 2026 00:49:08 +0000 (0:00:00.372) 0:02:59.716 ********** 2026-03-02 00:56:38.185489 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.185493 | orchestrator | 2026-03-02 00:56:38.185497 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-02 00:56:38.185500 | orchestrator | Monday 02 March 2026 00:49:09 +0000 (0:00:00.186) 0:02:59.902 ********** 2026-03-02 00:56:38.185504 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.185508 | orchestrator | 2026-03-02 00:56:38.185511 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-02 00:56:38.185515 | orchestrator | Monday 02 March 2026 00:49:09 +0000 (0:00:00.182) 0:03:00.085 ********** 2026-03-02 00:56:38.185519 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.185523 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.185527 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.185530 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:56:38.185534 | orchestrator | 2026-03-02 00:56:38.185538 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-03-02 00:56:38.185542 | orchestrator | Monday 02 March 2026 00:49:10 +0000 (0:00:00.989) 0:03:01.074 ********** 2026-03-02 00:56:38.185545 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.185549 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.185553 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.185556 | orchestrator | 2026-03-02 00:56:38.185560 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-02 00:56:38.185564 | orchestrator | Monday 02 March 2026 00:49:10 +0000 (0:00:00.337) 0:03:01.412 ********** 2026-03-02 00:56:38.185568 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.185571 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.185575 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.185579 | orchestrator | 2026-03-02 00:56:38.185583 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-02 00:56:38.185587 | orchestrator | Monday 02 March 2026 00:49:12 +0000 (0:00:01.450) 0:03:02.862 ********** 2026-03-02 00:56:38.185590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-02 00:56:38.185594 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-02 00:56:38.185598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-02 00:56:38.185601 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.185605 | orchestrator | 2026-03-02 00:56:38.185609 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-02 00:56:38.185613 | orchestrator | Monday 02 March 2026 00:49:12 +0000 (0:00:00.738) 0:03:03.600 ********** 2026-03-02 00:56:38.185617 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.185620 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.185624 | orchestrator | ok: [testbed-node-5] 2026-03-02 
00:56:38.185628 | orchestrator | 2026-03-02 00:56:38.185632 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-02 00:56:38.185635 | orchestrator | Monday 02 March 2026 00:49:13 +0000 (0:00:00.527) 0:03:04.128 ********** 2026-03-02 00:56:38.185639 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.185643 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.185649 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.185653 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:56:38.185657 | orchestrator | 2026-03-02 00:56:38.185661 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-02 00:56:38.185665 | orchestrator | Monday 02 March 2026 00:49:14 +0000 (0:00:00.862) 0:03:04.991 ********** 2026-03-02 00:56:38.185668 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.185672 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.185676 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.185680 | orchestrator | 2026-03-02 00:56:38.185683 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-02 00:56:38.185687 | orchestrator | Monday 02 March 2026 00:49:14 +0000 (0:00:00.708) 0:03:05.700 ********** 2026-03-02 00:56:38.185691 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.185695 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.185698 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.185702 | orchestrator | 2026-03-02 00:56:38.185706 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-02 00:56:38.185709 | orchestrator | Monday 02 March 2026 00:49:16 +0000 (0:00:01.206) 0:03:06.907 ********** 2026-03-02 00:56:38.185713 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-3)  2026-03-02 00:56:38.185717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-02 00:56:38.185721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-02 00:56:38.185724 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.185728 | orchestrator | 2026-03-02 00:56:38.185732 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-02 00:56:38.185736 | orchestrator | Monday 02 March 2026 00:49:16 +0000 (0:00:00.707) 0:03:07.614 ********** 2026-03-02 00:56:38.185740 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.185743 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.185747 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.185778 | orchestrator | 2026-03-02 00:56:38.185785 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-02 00:56:38.185791 | orchestrator | Monday 02 March 2026 00:49:17 +0000 (0:00:00.589) 0:03:08.204 ********** 2026-03-02 00:56:38.185797 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.185803 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.185808 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.185815 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.185820 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.185840 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.185844 | orchestrator | 2026-03-02 00:56:38.185848 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-02 00:56:38.185852 | orchestrator | Monday 02 March 2026 00:49:18 +0000 (0:00:00.934) 0:03:09.138 ********** 2026-03-02 00:56:38.185856 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.185859 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.185866 | orchestrator | skipping: [testbed-node-5] 2026-03-02 
00:56:38.185870 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:56:38.185874 | orchestrator | 2026-03-02 00:56:38.185877 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-02 00:56:38.185881 | orchestrator | Monday 02 March 2026 00:49:19 +0000 (0:00:00.814) 0:03:09.953 ********** 2026-03-02 00:56:38.185885 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.185889 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.185893 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.185896 | orchestrator | 2026-03-02 00:56:38.185900 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-02 00:56:38.185904 | orchestrator | Monday 02 March 2026 00:49:19 +0000 (0:00:00.400) 0:03:10.353 ********** 2026-03-02 00:56:38.185908 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:56:38.185917 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:56:38.185920 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:56:38.185924 | orchestrator | 2026-03-02 00:56:38.185928 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-02 00:56:38.185932 | orchestrator | Monday 02 March 2026 00:49:20 +0000 (0:00:01.270) 0:03:11.624 ********** 2026-03-02 00:56:38.185935 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-02 00:56:38.185939 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-02 00:56:38.185943 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-02 00:56:38.185946 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.185950 | orchestrator | 2026-03-02 00:56:38.185954 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-02 00:56:38.185958 | orchestrator | Monday 02 March 
2026 00:49:21 +0000 (0:00:00.899) 0:03:12.523 ********** 2026-03-02 00:56:38.185962 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.185966 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.185969 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.185973 | orchestrator | 2026-03-02 00:56:38.185977 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-02 00:56:38.185981 | orchestrator | 2026-03-02 00:56:38.185984 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-02 00:56:38.185988 | orchestrator | Monday 02 March 2026 00:49:22 +0000 (0:00:00.507) 0:03:13.031 ********** 2026-03-02 00:56:38.185992 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:56:38.185996 | orchestrator | 2026-03-02 00:56:38.186000 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-02 00:56:38.186003 | orchestrator | Monday 02 March 2026 00:49:22 +0000 (0:00:00.616) 0:03:13.648 ********** 2026-03-02 00:56:38.186007 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:56:38.186011 | orchestrator | 2026-03-02 00:56:38.186036 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-02 00:56:38.186040 | orchestrator | Monday 02 March 2026 00:49:23 +0000 (0:00:00.476) 0:03:14.124 ********** 2026-03-02 00:56:38.186044 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.186047 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.186051 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.186055 | orchestrator | 2026-03-02 00:56:38.186059 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-02 
00:56:38.186062 | orchestrator | Monday 02 March 2026 00:49:24 +0000 (0:00:00.837) 0:03:14.962 ********** 2026-03-02 00:56:38.186066 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.186070 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.186074 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.186081 | orchestrator | 2026-03-02 00:56:38.186089 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-02 00:56:38.186098 | orchestrator | Monday 02 March 2026 00:49:24 +0000 (0:00:00.260) 0:03:15.223 ********** 2026-03-02 00:56:38.186105 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.186111 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.186117 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.186124 | orchestrator | 2026-03-02 00:56:38.186130 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-02 00:56:38.186136 | orchestrator | Monday 02 March 2026 00:49:24 +0000 (0:00:00.269) 0:03:15.492 ********** 2026-03-02 00:56:38.186143 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.186149 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.186155 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.186161 | orchestrator | 2026-03-02 00:56:38.186167 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-02 00:56:38.186179 | orchestrator | Monday 02 March 2026 00:49:24 +0000 (0:00:00.235) 0:03:15.727 ********** 2026-03-02 00:56:38.186186 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.186191 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.186195 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.186199 | orchestrator | 2026-03-02 00:56:38.186203 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-02 00:56:38.186206 | 
orchestrator | Monday 02 March 2026 00:49:25 +0000 (0:00:00.886) 0:03:16.614 ********** 2026-03-02 00:56:38.186210 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.186214 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.186217 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.186221 | orchestrator | 2026-03-02 00:56:38.186225 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-02 00:56:38.186229 | orchestrator | Monday 02 March 2026 00:49:26 +0000 (0:00:00.313) 0:03:16.927 ********** 2026-03-02 00:56:38.186254 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.186259 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.186263 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.186267 | orchestrator | 2026-03-02 00:56:38.186270 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-02 00:56:38.186274 | orchestrator | Monday 02 March 2026 00:49:26 +0000 (0:00:00.297) 0:03:17.225 ********** 2026-03-02 00:56:38.186278 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.186285 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.186289 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.186293 | orchestrator | 2026-03-02 00:56:38.186297 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-02 00:56:38.186300 | orchestrator | Monday 02 March 2026 00:49:27 +0000 (0:00:00.709) 0:03:17.935 ********** 2026-03-02 00:56:38.186304 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.186308 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.186312 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.186315 | orchestrator | 2026-03-02 00:56:38.186319 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-02 00:56:38.186323 | orchestrator | Monday 02 March 2026 
00:49:28 +0000 (0:00:01.171) 0:03:19.106 ********** 2026-03-02 00:56:38.186326 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.186330 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.186334 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.186337 | orchestrator | 2026-03-02 00:56:38.186341 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-02 00:56:38.186345 | orchestrator | Monday 02 March 2026 00:49:28 +0000 (0:00:00.286) 0:03:19.393 ********** 2026-03-02 00:56:38.186349 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.186352 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.186356 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.186360 | orchestrator | 2026-03-02 00:56:38.186364 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-02 00:56:38.186367 | orchestrator | Monday 02 March 2026 00:49:28 +0000 (0:00:00.343) 0:03:19.736 ********** 2026-03-02 00:56:38.186371 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.186375 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.186378 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.186382 | orchestrator | 2026-03-02 00:56:38.186386 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-02 00:56:38.186390 | orchestrator | Monday 02 March 2026 00:49:29 +0000 (0:00:00.276) 0:03:20.013 ********** 2026-03-02 00:56:38.186393 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.186397 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.186401 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.186405 | orchestrator | 2026-03-02 00:56:38.186408 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-02 00:56:38.186412 | orchestrator | Monday 02 March 2026 00:49:29 +0000 
(0:00:00.400) 0:03:20.413 ********** 2026-03-02 00:56:38.186419 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.186423 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.186427 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.186430 | orchestrator | 2026-03-02 00:56:38.186434 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-02 00:56:38.186438 | orchestrator | Monday 02 March 2026 00:49:29 +0000 (0:00:00.303) 0:03:20.717 ********** 2026-03-02 00:56:38.186442 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.186446 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.186449 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.186453 | orchestrator | 2026-03-02 00:56:38.186457 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-02 00:56:38.186461 | orchestrator | Monday 02 March 2026 00:49:30 +0000 (0:00:00.267) 0:03:20.984 ********** 2026-03-02 00:56:38.186465 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.186468 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.186472 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.186476 | orchestrator | 2026-03-02 00:56:38.186479 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-02 00:56:38.186483 | orchestrator | Monday 02 March 2026 00:49:30 +0000 (0:00:00.256) 0:03:21.240 ********** 2026-03-02 00:56:38.186487 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.186490 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.186494 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.186498 | orchestrator | 2026-03-02 00:56:38.186502 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-02 00:56:38.186505 | orchestrator | Monday 02 March 2026 00:49:30 +0000 (0:00:00.281) 
0:03:21.522 ********** 2026-03-02 00:56:38.186509 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.186513 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.186516 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.186520 | orchestrator | 2026-03-02 00:56:38.186524 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-02 00:56:38.186528 | orchestrator | Monday 02 March 2026 00:49:31 +0000 (0:00:00.499) 0:03:22.021 ********** 2026-03-02 00:56:38.186531 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.186535 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.186539 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.186543 | orchestrator | 2026-03-02 00:56:38.186546 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-02 00:56:38.186550 | orchestrator | Monday 02 March 2026 00:49:31 +0000 (0:00:00.516) 0:03:22.538 ********** 2026-03-02 00:56:38.186554 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.186558 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.186562 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.186570 | orchestrator | 2026-03-02 00:56:38.186576 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-02 00:56:38.186585 | orchestrator | Monday 02 March 2026 00:49:32 +0000 (0:00:00.305) 0:03:22.843 ********** 2026-03-02 00:56:38.186594 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:56:38.186599 | orchestrator | 2026-03-02 00:56:38.186605 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-02 00:56:38.186611 | orchestrator | Monday 02 March 2026 00:49:32 +0000 (0:00:00.682) 0:03:23.526 ********** 2026-03-02 00:56:38.186616 | orchestrator | skipping: [testbed-node-0] 2026-03-02 
00:56:38.186622 | orchestrator | 2026-03-02 00:56:38.186648 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-02 00:56:38.186655 | orchestrator | Monday 02 March 2026 00:49:32 +0000 (0:00:00.145) 0:03:23.671 ********** 2026-03-02 00:56:38.186661 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-02 00:56:38.186666 | orchestrator | 2026-03-02 00:56:38.186672 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-02 00:56:38.186681 | orchestrator | Monday 02 March 2026 00:49:33 +0000 (0:00:00.932) 0:03:24.604 ********** 2026-03-02 00:56:38.186691 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.186697 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.186704 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.186710 | orchestrator | 2026-03-02 00:56:38.186716 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-02 00:56:38.186721 | orchestrator | Monday 02 March 2026 00:49:34 +0000 (0:00:00.341) 0:03:24.946 ********** 2026-03-02 00:56:38.186727 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.186732 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.186737 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.186743 | orchestrator | 2026-03-02 00:56:38.186748 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-02 00:56:38.186768 | orchestrator | Monday 02 March 2026 00:49:34 +0000 (0:00:00.441) 0:03:25.387 ********** 2026-03-02 00:56:38.186774 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:56:38.186779 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:56:38.186785 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:56:38.186791 | orchestrator | 2026-03-02 00:56:38.186796 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-02 
00:56:38.186802 | orchestrator | Monday 02 March 2026 00:49:36 +0000 (0:00:01.902) 0:03:27.290 ********** 2026-03-02 00:56:38.186807 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:56:38.186813 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:56:38.186819 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:56:38.186826 | orchestrator | 2026-03-02 00:56:38.186832 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-02 00:56:38.186838 | orchestrator | Monday 02 March 2026 00:49:37 +0000 (0:00:00.703) 0:03:27.994 ********** 2026-03-02 00:56:38.186843 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:56:38.186850 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:56:38.186856 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:56:38.186861 | orchestrator | 2026-03-02 00:56:38.186867 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-02 00:56:38.186873 | orchestrator | Monday 02 March 2026 00:49:37 +0000 (0:00:00.590) 0:03:28.584 ********** 2026-03-02 00:56:38.186879 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.186885 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.186891 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.186897 | orchestrator | 2026-03-02 00:56:38.186903 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-02 00:56:38.186906 | orchestrator | Monday 02 March 2026 00:49:38 +0000 (0:00:00.646) 0:03:29.230 ********** 2026-03-02 00:56:38.186910 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:56:38.186914 | orchestrator | 2026-03-02 00:56:38.186918 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-02 00:56:38.186921 | orchestrator | Monday 02 March 2026 00:49:40 +0000 (0:00:01.949) 0:03:31.180 ********** 2026-03-02 00:56:38.186925 | orchestrator | ok: [testbed-node-0] 
2026-03-02 00:56:38.186929 | orchestrator |
2026-03-02 00:56:38.186932 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-02 00:56:38.186936 | orchestrator | Monday 02 March 2026 00:49:41 +0000 (0:00:00.702) 0:03:31.883 **********
2026-03-02 00:56:38.186940 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-02 00:56:38.186943 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-02 00:56:38.186947 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-02 00:56:38.186951 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-02 00:56:38.186955 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-02 00:56:38.186958 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-03-02 00:56:38.186962 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-02 00:56:38.186966 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-03-02 00:56:38.186975 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-02 00:56:38.186979 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-03-02 00:56:38.186983 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-02 00:56:38.186986 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-03-02 00:56:38.186990 | orchestrator |
2026-03-02 00:56:38.186994 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-02 00:56:38.186997 | orchestrator | Monday 02 March 2026 00:49:43 +0000 (0:00:02.591) 0:03:34.474 **********
2026-03-02 00:56:38.187001 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:56:38.187005 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:56:38.187008 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:56:38.187012 | orchestrator |
2026-03-02 00:56:38.187016 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-03-02 00:56:38.187019 | orchestrator | Monday 02 March 2026 00:49:44 +0000 (0:00:00.991) 0:03:35.466 **********
2026-03-02 00:56:38.187023 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.187027 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.187031 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.187034 | orchestrator |
2026-03-02 00:56:38.187038 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-03-02 00:56:38.187042 | orchestrator | Monday 02 March 2026 00:49:44 +0000 (0:00:00.289) 0:03:35.756 **********
2026-03-02 00:56:38.187046 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.187049 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.187053 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.187057 | orchestrator |
2026-03-02 00:56:38.187060 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-03-02 00:56:38.187064 | orchestrator | Monday 02 March 2026 00:49:45 +0000 (0:00:00.472) 0:03:36.228 **********
2026-03-02 00:56:38.187068 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:56:38.187094 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:56:38.187098 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:56:38.187102 | orchestrator |
2026-03-02 00:56:38.187106 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-03-02 00:56:38.187109 | orchestrator | Monday 02 March 2026 00:49:48 +0000 (0:00:02.598) 0:03:38.826 **********
2026-03-02 00:56:38.187113 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:56:38.187121 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:56:38.187125 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:56:38.187128 | orchestrator |
2026-03-02 00:56:38.187132 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-03-02 00:56:38.187136 | orchestrator | Monday 02 March 2026 00:49:49 +0000 (0:00:01.314) 0:03:40.140 **********
2026-03-02 00:56:38.187140 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.187143 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.187147 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.187151 | orchestrator |
2026-03-02 00:56:38.187154 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-03-02 00:56:38.187158 | orchestrator | Monday 02 March 2026 00:49:49 +0000 (0:00:00.265) 0:03:40.406 **********
2026-03-02 00:56:38.187162 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:56:38.187166 | orchestrator |
2026-03-02 00:56:38.187171 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-03-02 00:56:38.187178 | orchestrator | Monday 02 March 2026 00:49:50 +0000 (0:00:00.603) 0:03:41.009 **********
2026-03-02 00:56:38.187184 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.187189 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.187194 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.187200 | orchestrator |
2026-03-02 00:56:38.187209 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-03-02 00:56:38.187216 | orchestrator | Monday 02 March 2026 00:49:50 +0000 (0:00:00.271) 0:03:41.280 **********
2026-03-02 00:56:38.187227 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.187233 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.187240 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.187245 | orchestrator |
2026-03-02 00:56:38.187251 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-03-02 00:56:38.187258 | orchestrator | Monday 02 March 2026 00:49:50 +0000 (0:00:00.260) 0:03:41.540 **********
2026-03-02 00:56:38.187264 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:56:38.187271 | orchestrator |
2026-03-02 00:56:38.187278 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-03-02 00:56:38.187285 | orchestrator | Monday 02 March 2026 00:49:51 +0000 (0:00:00.587) 0:03:42.128 **********
2026-03-02 00:56:38.187291 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:56:38.187298 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:56:38.187304 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:56:38.187310 | orchestrator |
2026-03-02 00:56:38.187314 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-03-02 00:56:38.187318 | orchestrator | Monday 02 March 2026 00:49:52 +0000 (0:00:01.381) 0:03:43.510 **********
2026-03-02 00:56:38.187322 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:56:38.187325 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:56:38.187329 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:56:38.187333 | orchestrator |
2026-03-02 00:56:38.187336 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-03-02 00:56:38.187340 | orchestrator | Monday 02 March 2026 00:49:53 +0000 (0:00:01.256) 0:03:44.766 **********
2026-03-02 00:56:38.187343 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:56:38.187347 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:56:38.187351 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:56:38.187354 | orchestrator |
2026-03-02 00:56:38.187358 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-03-02 00:56:38.187362 | orchestrator | Monday 02 March 2026 00:49:55 +0000 (0:00:01.819) 0:03:46.586 **********
2026-03-02 00:56:38.187366 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:56:38.187369 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:56:38.187373 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:56:38.187376 | orchestrator |
2026-03-02 00:56:38.187380 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-03-02 00:56:38.187384 | orchestrator | Monday 02 March 2026 00:49:58 +0000 (0:00:02.284) 0:03:48.870 **********
2026-03-02 00:56:38.187388 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:56:38.187392 | orchestrator |
2026-03-02 00:56:38.187395 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-03-02 00:56:38.187399 | orchestrator | Monday 02 March 2026 00:49:58 +0000 (0:00:00.539) 0:03:49.409 **********
2026-03-02 00:56:38.187403 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-03-02 00:56:38.187406 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.187410 | orchestrator |
2026-03-02 00:56:38.187414 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-03-02 00:56:38.187417 | orchestrator | Monday 02 March 2026 00:50:20 +0000 (0:00:22.069) 0:04:11.479 **********
2026-03-02 00:56:38.187421 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.187425 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.187429 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.187432 | orchestrator |
2026-03-02 00:56:38.187436 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-03-02 00:56:38.187440 | orchestrator | Monday 02 March 2026 00:50:31 +0000 (0:00:11.084) 0:04:22.563 **********
2026-03-02 00:56:38.187443 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.187447 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.187451 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.187458 | orchestrator |
2026-03-02 00:56:38.187462 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-03-02 00:56:38.187483 | orchestrator | Monday 02 March 2026 00:50:32 +0000 (0:00:00.453) 0:04:23.017 **********
2026-03-02 00:56:38.187492 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b9a69fd8b24d03fb2546d3db196289661f3f1fef'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-02 00:56:38.187497 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b9a69fd8b24d03fb2546d3db196289661f3f1fef'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-02 00:56:38.187501 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b9a69fd8b24d03fb2546d3db196289661f3f1fef'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-02 00:56:38.187506 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b9a69fd8b24d03fb2546d3db196289661f3f1fef'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-02 00:56:38.187510 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b9a69fd8b24d03fb2546d3db196289661f3f1fef'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-02 00:56:38.187514 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b9a69fd8b24d03fb2546d3db196289661f3f1fef'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__b9a69fd8b24d03fb2546d3db196289661f3f1fef'}])
2026-03-02 00:56:38.187519 | orchestrator |
2026-03-02 00:56:38.187523 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-02 00:56:38.187527 | orchestrator | Monday 02 March 2026 00:50:47 +0000 (0:00:14.810) 0:04:37.828 **********
2026-03-02 00:56:38.187530 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.187534 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.187538 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.187542 | orchestrator |
2026-03-02 00:56:38.187545 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-02 00:56:38.187549 | orchestrator | Monday 02 March 2026 00:50:47 +0000 (0:00:00.298) 0:04:38.127 **********
2026-03-02 00:56:38.187553 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:56:38.187557 | orchestrator |
2026-03-02 00:56:38.187560 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-02 00:56:38.187564 | orchestrator | Monday 02 March 2026 00:50:48 +0000 (0:00:00.335) 0:04:38.831 **********
2026-03-02 00:56:38.187568 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.187571 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.187575 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.187582 | orchestrator |
2026-03-02 00:56:38.187586 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-02 00:56:38.187590 | orchestrator | Monday 02 March 2026 00:50:48 +0000 (0:00:00.343) 0:04:39.167 **********
2026-03-02 00:56:38.187593 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.187599 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.187605 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.187613 | orchestrator |
2026-03-02 00:56:38.187624 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-02 00:56:38.187630 | orchestrator | Monday 02 March 2026 00:50:48 +0000 (0:00:00.343) 0:04:39.510 **********
2026-03-02 00:56:38.187635 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-02 00:56:38.187640 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-02 00:56:38.187645 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-02 00:56:38.187651 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.187657 | orchestrator |
2026-03-02 00:56:38.187663 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-02 00:56:38.187669 | orchestrator | Monday 02 March 2026 00:50:49 +0000 (0:00:00.859) 0:04:40.370 **********
2026-03-02 00:56:38.187675 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.187680 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.187703 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.187710 | orchestrator |
2026-03-02 00:56:38.187715 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-03-02 00:56:38.187721 | orchestrator |
2026-03-02 00:56:38.187727 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-02 00:56:38.187733 | orchestrator | Monday 02 March 2026 00:50:50 +0000 (0:00:00.518) 0:04:40.888 **********
2026-03-02 00:56:38.187742 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1, testbed-node-0, testbed-node-2
2026-03-02 00:56:38.187749 | orchestrator |
2026-03-02 00:56:38.187786 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-02 00:56:38.187793 | orchestrator | Monday 02 March 2026 00:50:50 +0000 (0:00:00.502) 0:04:41.391 **********
2026-03-02 00:56:38.187799 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:56:38.187806 | orchestrator |
2026-03-02 00:56:38.187810 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-02 00:56:38.187814 | orchestrator | Monday 02 March 2026 00:50:51 +0000 (0:00:00.711) 0:04:42.102 **********
2026-03-02 00:56:38.187818 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.187821 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.187825 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.187829 | orchestrator |
2026-03-02 00:56:38.187832 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-02 00:56:38.187836 | orchestrator | Monday 02 March 2026 00:50:52 +0000 (0:00:00.793) 0:04:42.895 **********
2026-03-02 00:56:38.187840 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.187844 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.187847 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.187851 | orchestrator |
2026-03-02 00:56:38.187855 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-02 00:56:38.187858 | orchestrator | Monday 02 March 2026 00:50:52 +0000 (0:00:00.570) 0:04:43.197 **********
2026-03-02 00:56:38.187862 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.187866 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.187869 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.187873 | orchestrator |
2026-03-02 00:56:38.187877 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-02 00:56:38.187880 | orchestrator | Monday 02 March 2026 00:50:52 +0000 (0:00:00.303) 0:04:43.768 **********
2026-03-02 00:56:38.187884 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.187892 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.187896 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.187899 | orchestrator |
2026-03-02 00:56:38.187903 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-02 00:56:38.187907 | orchestrator | Monday 02 March 2026 00:50:53 +0000 (0:00:00.303) 0:04:44.072 **********
2026-03-02 00:56:38.187910 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.187914 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.187918 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.187921 | orchestrator |
2026-03-02 00:56:38.187925 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-02 00:56:38.187929 | orchestrator | Monday 02 March 2026 00:50:54 +0000 (0:00:00.746) 0:04:44.819 **********
2026-03-02 00:56:38.187932 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.187936 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.187940 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.187944 | orchestrator |
2026-03-02 00:56:38.187950 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-02 00:56:38.187957 | orchestrator | Monday 02 March 2026 00:50:54 +0000 (0:00:00.348) 0:04:45.167 **********
2026-03-02 00:56:38.187966 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.187972 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.187978 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.187983 | orchestrator |
2026-03-02 00:56:38.187989 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-02 00:56:38.187995 | orchestrator | Monday 02 March 2026 00:50:55 +0000 (0:00:00.738) 0:04:45.905 **********
2026-03-02 00:56:38.188001 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.188007 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.188013 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.188019 | orchestrator |
2026-03-02 00:56:38.188025 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-02 00:56:38.188033 | orchestrator | Monday 02 March 2026 00:50:55 +0000 (0:00:00.725) 0:04:46.630 **********
2026-03-02 00:56:38.188037 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.188040 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.188044 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.188048 | orchestrator |
2026-03-02 00:56:38.188051 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-02 00:56:38.188055 | orchestrator | Monday 02 March 2026 00:50:56 +0000 (0:00:00.719) 0:04:47.350 **********
2026-03-02 00:56:38.188059 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.188062 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.188066 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.188070 | orchestrator |
2026-03-02 00:56:38.188073 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-02 00:56:38.188077 | orchestrator | Monday 02 March 2026 00:50:56 +0000 (0:00:00.307) 0:04:47.658 **********
2026-03-02 00:56:38.188081 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.188085 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.188088 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.188092 | orchestrator |
2026-03-02 00:56:38.188096 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-02 00:56:38.188099 | orchestrator | Monday 02 March 2026 00:50:57 +0000 (0:00:00.582) 0:04:48.240 **********
2026-03-02 00:56:38.188103 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.188107 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.188110 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.188114 | orchestrator |
2026-03-02 00:56:38.188118 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-02 00:56:38.188140 | orchestrator | Monday 02 March 2026 00:50:57 +0000 (0:00:00.304) 0:04:48.545 **********
2026-03-02 00:56:38.188145 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.188148 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.188152 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.188160 | orchestrator |
2026-03-02 00:56:38.188164 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-02 00:56:38.188168 | orchestrator | Monday 02 March 2026 00:50:58 +0000 (0:00:00.309) 0:04:48.855 **********
2026-03-02 00:56:38.188180 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.188186 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.188192 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.188198 | orchestrator |
2026-03-02 00:56:38.188203 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-02 00:56:38.188209 | orchestrator | Monday 02 March 2026 00:50:58 +0000 (0:00:00.306) 0:04:49.161 **********
2026-03-02 00:56:38.188214 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.188220 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.188233 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.188247 | orchestrator |
2026-03-02 00:56:38.188260 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-02 00:56:38.188275 | orchestrator | Monday 02 March 2026 00:50:58 +0000 (0:00:00.287) 0:04:49.448 **********
2026-03-02 00:56:38.188289 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.188300 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.188310 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.188322 | orchestrator |
2026-03-02 00:56:38.188336 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-02 00:56:38.188351 | orchestrator | Monday 02 March 2026 00:50:59 +0000 (0:00:00.533) 0:04:49.982 **********
2026-03-02 00:56:38.188362 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.188371 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.188382 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.188390 | orchestrator |
2026-03-02 00:56:38.188401 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-02 00:56:38.188410 | orchestrator | Monday 02 March 2026 00:50:59 +0000 (0:00:00.310) 0:04:50.292 **********
2026-03-02 00:56:38.188419 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.188428 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.188438 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.188448 | orchestrator |
2026-03-02 00:56:38.188457 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-02 00:56:38.188467 | orchestrator | Monday 02 March 2026 00:50:59 +0000 (0:00:00.299) 0:04:50.591 **********
2026-03-02 00:56:38.188477 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.188487 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.188496 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.188506 | orchestrator |
2026-03-02 00:56:38.188515 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-02 00:56:38.188525 | orchestrator | Monday 02 March 2026 00:51:00 +0000 (0:00:00.873) 0:04:51.465 **********
2026-03-02 00:56:38.188535 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-02 00:56:38.188545 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-02 00:56:38.188555 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-02 00:56:38.188569 | orchestrator |
2026-03-02 00:56:38.188584 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-02 00:56:38.188599 | orchestrator | Monday 02 March 2026 00:51:01 +0000 (0:00:00.649) 0:04:52.114 **********
2026-03-02 00:56:38.188612 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:56:38.188629 | orchestrator |
2026-03-02 00:56:38.188642 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-02 00:56:38.188648 | orchestrator | Monday 02 March 2026 00:51:01 +0000 (0:00:00.532) 0:04:52.647 **********
2026-03-02 00:56:38.188654 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:56:38.188660 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:56:38.188667 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:56:38.188681 | orchestrator |
2026-03-02 00:56:38.188687 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-02 00:56:38.188693 | orchestrator | Monday 02 March 2026 00:51:02 +0000 (0:00:00.781) 0:04:53.428 **********
2026-03-02 00:56:38.188700 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.188706 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.188712 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.188716 | orchestrator |
2026-03-02 00:56:38.188719 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-02 00:56:38.188723 | orchestrator | Monday 02 March 2026 00:51:03 +0000 (0:00:00.609) 0:04:54.037 **********
2026-03-02 00:56:38.188727 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-02 00:56:38.188731 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-02 00:56:38.188735 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-02 00:56:38.188738 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-03-02 00:56:38.188742 | orchestrator |
2026-03-02 00:56:38.188746 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-02 00:56:38.188763 | orchestrator | Monday 02 March 2026 00:51:14 +0000 (0:00:11.492) 0:05:05.530 **********
2026-03-02 00:56:38.188775 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.188781 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.188786 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.188792 | orchestrator |
2026-03-02 00:56:38.188798 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-02 00:56:38.188804 | orchestrator | Monday 02 March 2026 00:51:15 +0000 (0:00:00.325) 0:05:05.856 **********
2026-03-02 00:56:38.188809 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-02 00:56:38.188814 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-02 00:56:38.188820 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-02 00:56:38.188825 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-02 00:56:38.188830 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-02 00:56:38.188878 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-02 00:56:38.188885 | orchestrator |
2026-03-02 00:56:38.188890 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-02 00:56:38.188896 | orchestrator | Monday 02 March 2026 00:51:17 +0000 (0:00:02.419) 0:05:08.276 **********
2026-03-02 00:56:38.188902 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-02 00:56:38.188913 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-02 00:56:38.188921 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-02 00:56:38.188926 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-02 00:56:38.188932 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-02 00:56:38.188938 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-02 00:56:38.188944 | orchestrator |
2026-03-02 00:56:38.188950 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-02 00:56:38.188956 | orchestrator | Monday 02 March 2026 00:51:18 +0000 (0:00:01.290) 0:05:09.566 **********
2026-03-02 00:56:38.188961 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:56:38.188968 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:56:38.188974 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:56:38.188980 | orchestrator |
2026-03-02 00:56:38.188986 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-02 00:56:38.188993 | orchestrator | Monday 02 March 2026 00:51:19 +0000 (0:00:00.987) 0:05:10.553 **********
2026-03-02 00:56:38.188997 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.189001 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.189005 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.189009 | orchestrator |
2026-03-02 00:56:38.189012 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-02 00:56:38.189016 | orchestrator | Monday 02 March 2026 00:51:20 +0000 (0:00:00.304) 0:05:10.858 **********
2026-03-02 00:56:38.189024 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.189028 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.189032 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.189036 | orchestrator |
2026-03-02 00:56:38.189039 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-02 00:56:38.189043 | orchestrator | Monday 02 March 2026 00:51:20 +0000 (0:00:00.315) 0:05:11.174 **********
2026-03-02 00:56:38.189047 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:56:38.189050 | orchestrator |
2026-03-02 00:56:38.189054 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-02 00:56:38.189058 | orchestrator | Monday 02 March 2026 00:51:21 +0000 (0:00:00.740) 0:05:11.915 **********
2026-03-02 00:56:38.189061 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.189065 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.189069 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.189072 | orchestrator |
2026-03-02 00:56:38.189076 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-02 00:56:38.189080 | orchestrator | Monday 02 March 2026 00:51:21 +0000 (0:00:00.277) 0:05:12.193 **********
2026-03-02 00:56:38.189084 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.189087 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.189091 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:56:38.189094 | orchestrator |
2026-03-02 00:56:38.189098 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-02 00:56:38.189102 | orchestrator | Monday 02 March 2026 00:51:21 +0000 (0:00:00.302) 0:05:12.495 **********
2026-03-02 00:56:38.189105 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:56:38.189109 | orchestrator |
2026-03-02 00:56:38.189113 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-03-02 00:56:38.189117 | orchestrator | Monday 02 March 2026 00:51:22 +0000 (0:00:00.687) 0:05:13.183 **********
2026-03-02 00:56:38.189120 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:56:38.189124 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:56:38.189128 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:56:38.189131 | orchestrator |
2026-03-02 00:56:38.189135 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-03-02 00:56:38.189139 | orchestrator | Monday 02 March 2026 00:51:23 +0000 (0:00:01.397) 0:05:14.580 **********
2026-03-02 00:56:38.189142 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:56:38.189146 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:56:38.189150 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:56:38.189153 | orchestrator |
2026-03-02 00:56:38.189157 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-03-02 00:56:38.189160 | orchestrator | Monday 02 March 2026 00:51:24 +0000 (0:00:01.155) 0:05:15.736 **********
2026-03-02 00:56:38.189164 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:56:38.189168 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:56:38.189171 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:56:38.189175 | orchestrator |
2026-03-02 00:56:38.189179 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-03-02 00:56:38.189182 | orchestrator | Monday 02 March 2026 00:51:27 +0000 (0:00:03.004) 0:05:18.741 **********
2026-03-02 00:56:38.189186 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:56:38.189190 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:56:38.189193 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:56:38.189197 | orchestrator |
2026-03-02 00:56:38.189201 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-02 00:56:38.189204 | orchestrator | Monday 02 March 2026 00:51:30 +0000 (0:00:02.258) 0:05:20.999 **********
2026-03-02 00:56:38.189208 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:56:38.189212 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:56:38.189218 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-03-02 00:56:38.189222 | orchestrator |
2026-03-02 00:56:38.189225 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-03-02 00:56:38.189229 | orchestrator | Monday 02 March 2026 00:51:30 +0000 (0:00:00.366) 0:05:21.365 **********
2026-03-02 00:56:38.189248 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-03-02 00:56:38.189252 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-03-02 00:56:38.189256 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-03-02 00:56:38.189262 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-03-02 00:56:38.189266 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-03-02 00:56:38.189270 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left).
2026-03-02 00:56:38.189274 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-02 00:56:38.189278 | orchestrator | 2026-03-02 00:56:38.189281 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-02 00:56:38.189285 | orchestrator | Monday 02 March 2026 00:52:06 +0000 (0:00:36.312) 0:05:57.678 ********** 2026-03-02 00:56:38.189289 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-02 00:56:38.189293 | orchestrator | 2026-03-02 00:56:38.189296 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-02 00:56:38.189300 | orchestrator | Monday 02 March 2026 00:52:08 +0000 (0:00:01.334) 0:05:59.013 ********** 2026-03-02 00:56:38.189304 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.189307 | orchestrator | 2026-03-02 00:56:38.189313 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-02 00:56:38.189319 | orchestrator | Monday 02 March 2026 00:52:08 +0000 (0:00:00.302) 0:05:59.315 ********** 2026-03-02 00:56:38.189329 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.189335 | orchestrator | 2026-03-02 00:56:38.189341 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-02 00:56:38.189347 | orchestrator | Monday 02 March 2026 00:52:08 +0000 (0:00:00.127) 0:05:59.442 ********** 2026-03-02 00:56:38.189354 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-02 00:56:38.189359 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-02 00:56:38.189362 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-02 00:56:38.189366 | orchestrator | 2026-03-02 00:56:38.189370 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-03-02 00:56:38.189373 | orchestrator | Monday 02 March 2026 00:52:15 +0000 (0:00:06.686) 0:06:06.129 ********** 2026-03-02 00:56:38.189377 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-02 00:56:38.189381 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-03-02 00:56:38.189385 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-03-02 00:56:38.189388 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-02 00:56:38.189392 | orchestrator | 2026-03-02 00:56:38.189396 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-02 00:56:38.189399 | orchestrator | Monday 02 March 2026 00:52:20 +0000 (0:00:05.261) 0:06:11.390 ********** 2026-03-02 00:56:38.189403 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:56:38.189407 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:56:38.189410 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:56:38.189414 | orchestrator | 2026-03-02 00:56:38.189418 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-02 00:56:38.189425 | orchestrator | Monday 02 March 2026 00:52:21 +0000 (0:00:00.641) 0:06:12.031 ********** 2026-03-02 00:56:38.189429 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:56:38.189433 | orchestrator | 2026-03-02 00:56:38.189436 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-02 00:56:38.189440 | orchestrator | Monday 02 March 2026 00:52:21 +0000 (0:00:00.739) 0:06:12.771 ********** 2026-03-02 00:56:38.189444 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.189447 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.189451 | orchestrator | ok: 
[testbed-node-2] 2026-03-02 00:56:38.189455 | orchestrator | 2026-03-02 00:56:38.189458 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-02 00:56:38.189462 | orchestrator | Monday 02 March 2026 00:52:22 +0000 (0:00:00.303) 0:06:13.074 ********** 2026-03-02 00:56:38.189466 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:56:38.189469 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:56:38.189473 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:56:38.189477 | orchestrator | 2026-03-02 00:56:38.189480 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-02 00:56:38.189484 | orchestrator | Monday 02 March 2026 00:52:23 +0000 (0:00:01.298) 0:06:14.373 ********** 2026-03-02 00:56:38.189488 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-02 00:56:38.189492 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-02 00:56:38.189495 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-02 00:56:38.189499 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.189503 | orchestrator | 2026-03-02 00:56:38.189506 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-02 00:56:38.189510 | orchestrator | Monday 02 March 2026 00:52:24 +0000 (0:00:00.863) 0:06:15.236 ********** 2026-03-02 00:56:38.189514 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.189517 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.189521 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.189525 | orchestrator | 2026-03-02 00:56:38.189529 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-03-02 00:56:38.189532 | orchestrator | 2026-03-02 00:56:38.189536 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-02 
00:56:38.189554 | orchestrator | Monday 02 March 2026 00:52:25 +0000 (0:00:00.760) 0:06:15.996 ********** 2026-03-02 00:56:38.189559 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:56:38.189563 | orchestrator | 2026-03-02 00:56:38.189566 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-02 00:56:38.189572 | orchestrator | Monday 02 March 2026 00:52:25 +0000 (0:00:00.500) 0:06:16.497 ********** 2026-03-02 00:56:38.189576 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4, testbed-node-3, testbed-node-5 2026-03-02 00:56:38.189580 | orchestrator | 2026-03-02 00:56:38.189584 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-02 00:56:38.189588 | orchestrator | Monday 02 March 2026 00:52:26 +0000 (0:00:00.695) 0:06:17.193 ********** 2026-03-02 00:56:38.189591 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.189595 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.189599 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.189602 | orchestrator | 2026-03-02 00:56:38.189606 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-02 00:56:38.189610 | orchestrator | Monday 02 March 2026 00:52:26 +0000 (0:00:00.302) 0:06:17.496 ********** 2026-03-02 00:56:38.189613 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.189617 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.189621 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.189625 | orchestrator | 2026-03-02 00:56:38.189631 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-02 00:56:38.189635 | orchestrator | Monday 02 March 2026 00:52:27 +0000 (0:00:00.731) 0:06:18.227 ********** 
2026-03-02 00:56:38.189638 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.189642 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.189646 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.189649 | orchestrator | 2026-03-02 00:56:38.189653 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-02 00:56:38.189657 | orchestrator | Monday 02 March 2026 00:52:28 +0000 (0:00:00.706) 0:06:18.934 ********** 2026-03-02 00:56:38.189661 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.189664 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.189668 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.189672 | orchestrator | 2026-03-02 00:56:38.189675 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-02 00:56:38.189679 | orchestrator | Monday 02 March 2026 00:52:29 +0000 (0:00:00.852) 0:06:19.787 ********** 2026-03-02 00:56:38.189683 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.189687 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.189690 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.189694 | orchestrator | 2026-03-02 00:56:38.189698 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-02 00:56:38.189701 | orchestrator | Monday 02 March 2026 00:52:29 +0000 (0:00:00.279) 0:06:20.066 ********** 2026-03-02 00:56:38.189705 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.189709 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.189713 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.189716 | orchestrator | 2026-03-02 00:56:38.189720 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-02 00:56:38.189724 | orchestrator | Monday 02 March 2026 00:52:29 +0000 (0:00:00.294) 0:06:20.360 ********** 2026-03-02 00:56:38.189727 | 
orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.189731 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.189735 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.189739 | orchestrator | 2026-03-02 00:56:38.189742 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-02 00:56:38.189746 | orchestrator | Monday 02 March 2026 00:52:29 +0000 (0:00:00.273) 0:06:20.633 ********** 2026-03-02 00:56:38.189750 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.189768 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.189774 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.189779 | orchestrator | 2026-03-02 00:56:38.189785 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-02 00:56:38.189792 | orchestrator | Monday 02 March 2026 00:52:30 +0000 (0:00:00.932) 0:06:21.566 ********** 2026-03-02 00:56:38.189796 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.189799 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.189803 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.189807 | orchestrator | 2026-03-02 00:56:38.189810 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-02 00:56:38.189814 | orchestrator | Monday 02 March 2026 00:52:31 +0000 (0:00:00.734) 0:06:22.301 ********** 2026-03-02 00:56:38.189818 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.189821 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.189825 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.189829 | orchestrator | 2026-03-02 00:56:38.189832 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-02 00:56:38.189836 | orchestrator | Monday 02 March 2026 00:52:31 +0000 (0:00:00.287) 0:06:22.589 ********** 2026-03-02 00:56:38.189840 | orchestrator | skipping: 
[testbed-node-3] 2026-03-02 00:56:38.189843 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.189847 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.189851 | orchestrator | 2026-03-02 00:56:38.189854 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-02 00:56:38.189861 | orchestrator | Monday 02 March 2026 00:52:32 +0000 (0:00:00.285) 0:06:22.875 ********** 2026-03-02 00:56:38.189864 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.189868 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.189872 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.189875 | orchestrator | 2026-03-02 00:56:38.189879 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-02 00:56:38.189883 | orchestrator | Monday 02 March 2026 00:52:32 +0000 (0:00:00.543) 0:06:23.418 ********** 2026-03-02 00:56:38.189886 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.189890 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.189894 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.189897 | orchestrator | 2026-03-02 00:56:38.189901 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-02 00:56:38.189907 | orchestrator | Monday 02 March 2026 00:52:32 +0000 (0:00:00.304) 0:06:23.722 ********** 2026-03-02 00:56:38.189911 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.189914 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.189918 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.189922 | orchestrator | 2026-03-02 00:56:38.189925 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-02 00:56:38.189931 | orchestrator | Monday 02 March 2026 00:52:33 +0000 (0:00:00.297) 0:06:24.020 ********** 2026-03-02 00:56:38.189935 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.189939 | 
orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.189943 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.189946 | orchestrator | 2026-03-02 00:56:38.189950 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-02 00:56:38.189954 | orchestrator | Monday 02 March 2026 00:52:33 +0000 (0:00:00.260) 0:06:24.280 ********** 2026-03-02 00:56:38.189957 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.189961 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.189965 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.189968 | orchestrator | 2026-03-02 00:56:38.189972 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-02 00:56:38.189976 | orchestrator | Monday 02 March 2026 00:52:33 +0000 (0:00:00.424) 0:06:24.704 ********** 2026-03-02 00:56:38.189979 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.189983 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.189987 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.189990 | orchestrator | 2026-03-02 00:56:38.189994 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-02 00:56:38.189998 | orchestrator | Monday 02 March 2026 00:52:34 +0000 (0:00:00.281) 0:06:24.986 ********** 2026-03-02 00:56:38.190001 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.190005 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.190009 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.190042 | orchestrator | 2026-03-02 00:56:38.190049 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-02 00:56:38.190056 | orchestrator | Monday 02 March 2026 00:52:34 +0000 (0:00:00.337) 0:06:25.323 ********** 2026-03-02 00:56:38.190062 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.190068 | orchestrator | ok: 
[testbed-node-4] 2026-03-02 00:56:38.190074 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.190079 | orchestrator | 2026-03-02 00:56:38.190085 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-02 00:56:38.190092 | orchestrator | Monday 02 March 2026 00:52:35 +0000 (0:00:00.760) 0:06:26.084 ********** 2026-03-02 00:56:38.190098 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.190104 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.190109 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.190115 | orchestrator | 2026-03-02 00:56:38.190120 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-02 00:56:38.190126 | orchestrator | Monday 02 March 2026 00:52:35 +0000 (0:00:00.358) 0:06:26.443 ********** 2026-03-02 00:56:38.190131 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-02 00:56:38.190141 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-02 00:56:38.190147 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-02 00:56:38.190152 | orchestrator | 2026-03-02 00:56:38.190158 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-02 00:56:38.190164 | orchestrator | Monday 02 March 2026 00:52:36 +0000 (0:00:00.680) 0:06:27.123 ********** 2026-03-02 00:56:38.190170 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:56:38.190176 | orchestrator | 2026-03-02 00:56:38.190183 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-02 00:56:38.190189 | orchestrator | Monday 02 March 2026 00:52:36 +0000 (0:00:00.585) 0:06:27.709 ********** 2026-03-02 00:56:38.190195 | orchestrator | skipping: 
[testbed-node-3] 2026-03-02 00:56:38.190200 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.190206 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.190213 | orchestrator | 2026-03-02 00:56:38.190219 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-02 00:56:38.190225 | orchestrator | Monday 02 March 2026 00:52:37 +0000 (0:00:00.759) 0:06:28.469 ********** 2026-03-02 00:56:38.190230 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.190234 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.190238 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.190242 | orchestrator | 2026-03-02 00:56:38.190245 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-02 00:56:38.190249 | orchestrator | Monday 02 March 2026 00:52:38 +0000 (0:00:00.381) 0:06:28.850 ********** 2026-03-02 00:56:38.190253 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.190257 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.190260 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.190264 | orchestrator | 2026-03-02 00:56:38.190268 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-02 00:56:38.190271 | orchestrator | Monday 02 March 2026 00:52:38 +0000 (0:00:00.779) 0:06:29.630 ********** 2026-03-02 00:56:38.190275 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.190279 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.190282 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.190286 | orchestrator | 2026-03-02 00:56:38.190290 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-02 00:56:38.190293 | orchestrator | Monday 02 March 2026 00:52:39 +0000 (0:00:00.344) 0:06:29.974 ********** 2026-03-02 00:56:38.190297 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-02 00:56:38.190301 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-02 00:56:38.190305 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-02 00:56:38.190309 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-02 00:56:38.190317 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-02 00:56:38.190321 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-02 00:56:38.190325 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-02 00:56:38.190332 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-02 00:56:38.190336 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-02 00:56:38.190340 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-02 00:56:38.190344 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-02 00:56:38.190350 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-02 00:56:38.190354 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-02 00:56:38.190358 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-02 00:56:38.190362 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-02 00:56:38.190365 | orchestrator | 2026-03-02 00:56:38.190369 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-03-02 00:56:38.190373 | orchestrator | Monday 02 March 2026 00:52:43 +0000 (0:00:03.828) 0:06:33.802 ********** 2026-03-02 00:56:38.190376 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.190380 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.190384 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.190388 | orchestrator | 2026-03-02 00:56:38.190391 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-02 00:56:38.190395 | orchestrator | Monday 02 March 2026 00:52:43 +0000 (0:00:00.408) 0:06:34.211 ********** 2026-03-02 00:56:38.190399 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:56:38.190402 | orchestrator | 2026-03-02 00:56:38.190406 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-02 00:56:38.190410 | orchestrator | Monday 02 March 2026 00:52:44 +0000 (0:00:00.569) 0:06:34.780 ********** 2026-03-02 00:56:38.190414 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-02 00:56:38.190417 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-02 00:56:38.190421 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-02 00:56:38.190425 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-02 00:56:38.190429 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-02 00:56:38.190432 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-02 00:56:38.190436 | orchestrator | 2026-03-02 00:56:38.190440 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-02 00:56:38.190443 | orchestrator | Monday 02 March 2026 00:52:45 +0000 (0:00:01.333) 0:06:36.114 ********** 2026-03-02 00:56:38.190447 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:56:38.190451 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-02 00:56:38.190455 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-02 00:56:38.190458 | orchestrator | 2026-03-02 00:56:38.190462 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-02 00:56:38.190466 | orchestrator | Monday 02 March 2026 00:52:47 +0000 (0:00:02.093) 0:06:38.208 ********** 2026-03-02 00:56:38.190470 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-02 00:56:38.190473 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-02 00:56:38.190477 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.190481 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-02 00:56:38.190485 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-02 00:56:38.190488 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.190492 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-02 00:56:38.190496 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-02 00:56:38.190500 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.190503 | orchestrator | 2026-03-02 00:56:38.190507 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-02 00:56:38.190511 | orchestrator | Monday 02 March 2026 00:52:48 +0000 (0:00:01.150) 0:06:39.359 ********** 2026-03-02 00:56:38.190515 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-02 00:56:38.190518 | orchestrator | 2026-03-02 00:56:38.190522 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-02 00:56:38.190528 | orchestrator | Monday 02 March 2026 00:52:50 +0000 (0:00:02.249) 0:06:41.609 ********** 2026-03-02 00:56:38.190532 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:56:38.190536 | orchestrator | 2026-03-02 00:56:38.190540 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-02 00:56:38.190543 | orchestrator | Monday 02 March 2026 00:52:51 +0000 (0:00:00.968) 0:06:42.577 ********** 2026-03-02 00:56:38.190547 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c1d64d47-37ed-5019-b7d5-718691437d08', 'data_vg': 'ceph-c1d64d47-37ed-5019-b7d5-718691437d08'}) 2026-03-02 00:56:38.190552 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-271875e3-8908-5e0e-b413-64afee9519da', 'data_vg': 'ceph-271875e3-8908-5e0e-b413-64afee9519da'}) 2026-03-02 00:56:38.190558 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-de3a51bd-019b-527a-8dea-ff4c94e5d801', 'data_vg': 'ceph-de3a51bd-019b-527a-8dea-ff4c94e5d801'}) 2026-03-02 00:56:38.190566 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3d7235d6-f117-525f-ba2d-9ab371851486', 'data_vg': 'ceph-3d7235d6-f117-525f-ba2d-9ab371851486'}) 2026-03-02 00:56:38.190570 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-52125f52-6af3-5290-9fed-9584660c39a2', 'data_vg': 'ceph-52125f52-6af3-5290-9fed-9584660c39a2'}) 2026-03-02 00:56:38.190573 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8a84d633-ba5b-5049-b6da-2482ee8b3083', 'data_vg': 'ceph-8a84d633-ba5b-5049-b6da-2482ee8b3083'}) 2026-03-02 00:56:38.190577 | orchestrator | 2026-03-02 00:56:38.190581 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-02 00:56:38.190585 | orchestrator | Monday 02 March 2026 00:53:30 +0000 (0:00:39.172) 0:07:21.749 ********** 2026-03-02 00:56:38.190588 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.190592 | orchestrator | skipping: [testbed-node-4] 2026-03-02 
00:56:38.190596 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.190599 | orchestrator | 2026-03-02 00:56:38.190603 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-02 00:56:38.190607 | orchestrator | Monday 02 March 2026 00:53:31 +0000 (0:00:00.299) 0:07:22.048 ********** 2026-03-02 00:56:38.190610 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:56:38.190614 | orchestrator | 2026-03-02 00:56:38.190618 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-02 00:56:38.190621 | orchestrator | Monday 02 March 2026 00:53:32 +0000 (0:00:00.732) 0:07:22.781 ********** 2026-03-02 00:56:38.190625 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.190629 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.190632 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.190636 | orchestrator | 2026-03-02 00:56:38.190640 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-02 00:56:38.190644 | orchestrator | Monday 02 March 2026 00:53:32 +0000 (0:00:00.635) 0:07:23.416 ********** 2026-03-02 00:56:38.190647 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.190651 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.190655 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.190658 | orchestrator | 2026-03-02 00:56:38.190662 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-02 00:56:38.190666 | orchestrator | Monday 02 March 2026 00:53:34 +0000 (0:00:02.302) 0:07:25.719 ********** 2026-03-02 00:56:38.190669 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:56:38.190673 | orchestrator | 2026-03-02 00:56:38.190677 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-03-02 00:56:38.190681 | orchestrator | Monday 02 March 2026 00:53:35 +0000 (0:00:00.795) 0:07:26.515 ********** 2026-03-02 00:56:38.190684 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.190691 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.190694 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.190698 | orchestrator | 2026-03-02 00:56:38.190702 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-02 00:56:38.190706 | orchestrator | Monday 02 March 2026 00:53:36 +0000 (0:00:01.126) 0:07:27.642 ********** 2026-03-02 00:56:38.190709 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.190713 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.190717 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.190720 | orchestrator | 2026-03-02 00:56:38.190724 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-02 00:56:38.190728 | orchestrator | Monday 02 March 2026 00:53:37 +0000 (0:00:00.989) 0:07:28.631 ********** 2026-03-02 00:56:38.190731 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.190735 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.190739 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.190742 | orchestrator | 2026-03-02 00:56:38.190746 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-02 00:56:38.190750 | orchestrator | Monday 02 March 2026 00:53:39 +0000 (0:00:01.812) 0:07:30.444 ********** 2026-03-02 00:56:38.190775 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.190779 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.190783 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.190786 | orchestrator | 2026-03-02 00:56:38.190790 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-03-02 00:56:38.190794 | orchestrator | Monday 02 March 2026 00:53:40 +0000 (0:00:00.567) 0:07:31.012 ********** 2026-03-02 00:56:38.190798 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.190801 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.190805 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.190808 | orchestrator | 2026-03-02 00:56:38.190812 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-02 00:56:38.190816 | orchestrator | Monday 02 March 2026 00:53:40 +0000 (0:00:00.346) 0:07:31.358 ********** 2026-03-02 00:56:38.190820 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-02 00:56:38.190823 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-03-02 00:56:38.190827 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-03-02 00:56:38.190831 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-03-02 00:56:38.190834 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-03-02 00:56:38.190838 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-03-02 00:56:38.190842 | orchestrator | 2026-03-02 00:56:38.190845 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-02 00:56:38.190849 | orchestrator | Monday 02 March 2026 00:53:41 +0000 (0:00:00.988) 0:07:32.346 ********** 2026-03-02 00:56:38.190853 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-02 00:56:38.190857 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-03-02 00:56:38.190863 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-02 00:56:38.190867 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-03-02 00:56:38.190871 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-02 00:56:38.190874 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-03-02 00:56:38.190878 | orchestrator | 2026-03-02 00:56:38.190882 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-03-02 00:56:38.190889 | orchestrator | Monday 02 March 2026 00:53:43 +0000 (0:00:01.899) 0:07:34.246 ********** 2026-03-02 00:56:38.190893 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-02 00:56:38.190897 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-03-02 00:56:38.190900 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-02 00:56:38.190904 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-03-02 00:56:38.190908 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-02 00:56:38.190911 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-03-02 00:56:38.190915 | orchestrator | 2026-03-02 00:56:38.190922 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-02 00:56:38.190926 | orchestrator | Monday 02 March 2026 00:53:46 +0000 (0:00:03.489) 0:07:37.736 ********** 2026-03-02 00:56:38.190929 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.190933 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.190937 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-02 00:56:38.190940 | orchestrator | 2026-03-02 00:56:38.190944 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-02 00:56:38.190948 | orchestrator | Monday 02 March 2026 00:53:49 +0000 (0:00:02.173) 0:07:39.909 ********** 2026-03-02 00:56:38.190952 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.190955 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.190959 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-03-02 00:56:38.190963 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-02 00:56:38.190966 | orchestrator | 2026-03-02 00:56:38.190970 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-02 00:56:38.190974 | orchestrator | Monday 02 March 2026 00:54:01 +0000 (0:00:12.168) 0:07:52.078 ********** 2026-03-02 00:56:38.190977 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.190981 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.190985 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.190989 | orchestrator | 2026-03-02 00:56:38.190992 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-02 00:56:38.190996 | orchestrator | Monday 02 March 2026 00:54:02 +0000 (0:00:00.876) 0:07:52.955 ********** 2026-03-02 00:56:38.191000 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191003 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.191007 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.191011 | orchestrator | 2026-03-02 00:56:38.191015 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-02 00:56:38.191018 | orchestrator | Monday 02 March 2026 00:54:02 +0000 (0:00:00.291) 0:07:53.246 ********** 2026-03-02 00:56:38.191022 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:56:38.191026 | orchestrator | 2026-03-02 00:56:38.191030 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-02 00:56:38.191033 | orchestrator | Monday 02 March 2026 00:54:03 +0000 (0:00:00.549) 0:07:53.796 ********** 2026-03-02 00:56:38.191037 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-02 00:56:38.191041 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-03-02 00:56:38.191044 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-02 00:56:38.191048 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191052 | orchestrator | 2026-03-02 00:56:38.191056 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-02 00:56:38.191059 | orchestrator | Monday 02 March 2026 00:54:03 +0000 (0:00:00.346) 0:07:54.142 ********** 2026-03-02 00:56:38.191063 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191067 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.191071 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.191074 | orchestrator | 2026-03-02 00:56:38.191078 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-02 00:56:38.191082 | orchestrator | Monday 02 March 2026 00:54:03 +0000 (0:00:00.237) 0:07:54.380 ********** 2026-03-02 00:56:38.191085 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191089 | orchestrator | 2026-03-02 00:56:38.191093 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-02 00:56:38.191097 | orchestrator | Monday 02 March 2026 00:54:03 +0000 (0:00:00.206) 0:07:54.586 ********** 2026-03-02 00:56:38.191100 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191104 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.191111 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.191114 | orchestrator | 2026-03-02 00:56:38.191118 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-02 00:56:38.191122 | orchestrator | Monday 02 March 2026 00:54:04 +0000 (0:00:00.229) 0:07:54.816 ********** 2026-03-02 00:56:38.191126 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191129 | orchestrator | 2026-03-02 00:56:38.191133 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-02 00:56:38.191137 | orchestrator | Monday 02 March 2026 00:54:04 +0000 (0:00:00.179) 0:07:54.996 ********** 2026-03-02 00:56:38.191140 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191144 | orchestrator | 2026-03-02 00:56:38.191148 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-02 00:56:38.191152 | orchestrator | Monday 02 March 2026 00:54:04 +0000 (0:00:00.198) 0:07:55.194 ********** 2026-03-02 00:56:38.191155 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191159 | orchestrator | 2026-03-02 00:56:38.191163 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-02 00:56:38.191166 | orchestrator | Monday 02 March 2026 00:54:04 +0000 (0:00:00.118) 0:07:55.313 ********** 2026-03-02 00:56:38.191172 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191176 | orchestrator | 2026-03-02 00:56:38.191180 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-02 00:56:38.191184 | orchestrator | Monday 02 March 2026 00:54:05 +0000 (0:00:00.568) 0:07:55.882 ********** 2026-03-02 00:56:38.191187 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191191 | orchestrator | 2026-03-02 00:56:38.191197 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-02 00:56:38.191201 | orchestrator | Monday 02 March 2026 00:54:05 +0000 (0:00:00.230) 0:07:56.112 ********** 2026-03-02 00:56:38.191205 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-02 00:56:38.191208 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-02 00:56:38.191212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-02 00:56:38.191216 | orchestrator | skipping: [testbed-node-3] 2026-03-02 
00:56:38.191220 | orchestrator | 2026-03-02 00:56:38.191223 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-02 00:56:38.191227 | orchestrator | Monday 02 March 2026 00:54:05 +0000 (0:00:00.379) 0:07:56.491 ********** 2026-03-02 00:56:38.191231 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191234 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.191238 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.191242 | orchestrator | 2026-03-02 00:56:38.191246 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-02 00:56:38.191249 | orchestrator | Monday 02 March 2026 00:54:06 +0000 (0:00:00.290) 0:07:56.782 ********** 2026-03-02 00:56:38.191253 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191257 | orchestrator | 2026-03-02 00:56:38.191260 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-02 00:56:38.191264 | orchestrator | Monday 02 March 2026 00:54:06 +0000 (0:00:00.192) 0:07:56.974 ********** 2026-03-02 00:56:38.191268 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191271 | orchestrator | 2026-03-02 00:56:38.191275 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-03-02 00:56:38.191279 | orchestrator | 2026-03-02 00:56:38.191283 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-02 00:56:38.191286 | orchestrator | Monday 02 March 2026 00:54:07 +0000 (0:00:00.885) 0:07:57.860 ********** 2026-03-02 00:56:38.191290 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:56:38.191295 | orchestrator | 2026-03-02 00:56:38.191298 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-03-02 00:56:38.191305 | orchestrator | Monday 02 March 2026 00:54:08 +0000 (0:00:01.254) 0:07:59.114 ********** 2026-03-02 00:56:38.191309 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:56:38.191313 | orchestrator | 2026-03-02 00:56:38.191316 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-02 00:56:38.191320 | orchestrator | Monday 02 March 2026 00:54:09 +0000 (0:00:01.289) 0:08:00.403 ********** 2026-03-02 00:56:38.191324 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191327 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.191331 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.191335 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.191339 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.191342 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.191346 | orchestrator | 2026-03-02 00:56:38.191350 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-02 00:56:38.191354 | orchestrator | Monday 02 March 2026 00:54:10 +0000 (0:00:00.989) 0:08:01.393 ********** 2026-03-02 00:56:38.191357 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.191361 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.191365 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.191368 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.191372 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.191376 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.191379 | orchestrator | 2026-03-02 00:56:38.191383 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-02 00:56:38.191387 | orchestrator | Monday 02 
March 2026 00:54:11 +0000 (0:00:00.653) 0:08:02.047 ********** 2026-03-02 00:56:38.191391 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.191394 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.191398 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.191402 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.191405 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.191409 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.191413 | orchestrator | 2026-03-02 00:56:38.191417 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-02 00:56:38.191420 | orchestrator | Monday 02 March 2026 00:54:12 +0000 (0:00:00.954) 0:08:03.001 ********** 2026-03-02 00:56:38.191424 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.191428 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.191431 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.191435 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.191439 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.191442 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.191446 | orchestrator | 2026-03-02 00:56:38.191450 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-02 00:56:38.191454 | orchestrator | Monday 02 March 2026 00:54:12 +0000 (0:00:00.666) 0:08:03.668 ********** 2026-03-02 00:56:38.191457 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191461 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.191465 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.191468 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.191472 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.191476 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.191480 | orchestrator | 2026-03-02 00:56:38.191483 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2026-03-02 00:56:38.191489 | orchestrator | Monday 02 March 2026 00:54:14 +0000 (0:00:01.159) 0:08:04.827 ********** 2026-03-02 00:56:38.191493 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191497 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.191500 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.191504 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.191508 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.191512 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.191518 | orchestrator | 2026-03-02 00:56:38.191524 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-02 00:56:38.191528 | orchestrator | Monday 02 March 2026 00:54:14 +0000 (0:00:00.531) 0:08:05.359 ********** 2026-03-02 00:56:38.191531 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191535 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.191539 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.191542 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.191546 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.191550 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.191553 | orchestrator | 2026-03-02 00:56:38.191557 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-02 00:56:38.191561 | orchestrator | Monday 02 March 2026 00:54:15 +0000 (0:00:00.701) 0:08:06.060 ********** 2026-03-02 00:56:38.191565 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.191568 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.191572 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.191576 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.191579 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.191583 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.191587 | 
orchestrator | 2026-03-02 00:56:38.191591 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-02 00:56:38.191595 | orchestrator | Monday 02 March 2026 00:54:16 +0000 (0:00:00.943) 0:08:07.004 ********** 2026-03-02 00:56:38.191598 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.191602 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.191606 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.191609 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.191613 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.191617 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.191620 | orchestrator | 2026-03-02 00:56:38.191624 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-02 00:56:38.191628 | orchestrator | Monday 02 March 2026 00:54:17 +0000 (0:00:01.224) 0:08:08.228 ********** 2026-03-02 00:56:38.191632 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191635 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.191639 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.191643 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.191646 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.191650 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.191654 | orchestrator | 2026-03-02 00:56:38.191657 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-02 00:56:38.191661 | orchestrator | Monday 02 March 2026 00:54:17 +0000 (0:00:00.453) 0:08:08.682 ********** 2026-03-02 00:56:38.191665 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191669 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.191672 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.191676 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.191680 | orchestrator | ok: [testbed-node-1] 2026-03-02 
00:56:38.191683 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.191687 | orchestrator | 2026-03-02 00:56:38.191691 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-02 00:56:38.191695 | orchestrator | Monday 02 March 2026 00:54:18 +0000 (0:00:00.677) 0:08:09.360 ********** 2026-03-02 00:56:38.191698 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.191702 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.191706 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.191709 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.191713 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.191717 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.191720 | orchestrator | 2026-03-02 00:56:38.191724 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-02 00:56:38.191728 | orchestrator | Monday 02 March 2026 00:54:19 +0000 (0:00:00.529) 0:08:09.889 ********** 2026-03-02 00:56:38.191734 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.191738 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.191742 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.191745 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.191749 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.191761 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.191765 | orchestrator | 2026-03-02 00:56:38.191768 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-02 00:56:38.191772 | orchestrator | Monday 02 March 2026 00:54:19 +0000 (0:00:00.708) 0:08:10.598 ********** 2026-03-02 00:56:38.191776 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.191779 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.191783 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.191787 | orchestrator | skipping: [testbed-node-0] 
2026-03-02 00:56:38.191791 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.191794 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.191798 | orchestrator | 2026-03-02 00:56:38.191802 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-02 00:56:38.191806 | orchestrator | Monday 02 March 2026 00:54:20 +0000 (0:00:00.556) 0:08:11.154 ********** 2026-03-02 00:56:38.191809 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191813 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.191817 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.191820 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.191824 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.191828 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.191831 | orchestrator | 2026-03-02 00:56:38.191835 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-02 00:56:38.191839 | orchestrator | Monday 02 March 2026 00:54:20 +0000 (0:00:00.612) 0:08:11.767 ********** 2026-03-02 00:56:38.191843 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191846 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.191850 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.191854 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:56:38.191857 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:56:38.191861 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:56:38.191865 | orchestrator | 2026-03-02 00:56:38.191868 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-02 00:56:38.191875 | orchestrator | Monday 02 March 2026 00:54:21 +0000 (0:00:00.500) 0:08:12.267 ********** 2026-03-02 00:56:38.191879 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.191882 | orchestrator | skipping: [testbed-node-4] 
2026-03-02 00:56:38.191886 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.191890 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.191893 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.191897 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.191901 | orchestrator | 2026-03-02 00:56:38.191907 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-02 00:56:38.191911 | orchestrator | Monday 02 March 2026 00:54:22 +0000 (0:00:00.673) 0:08:12.941 ********** 2026-03-02 00:56:38.191914 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.191918 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.191922 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.191925 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.191929 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.191933 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.191936 | orchestrator | 2026-03-02 00:56:38.191940 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-02 00:56:38.191944 | orchestrator | Monday 02 March 2026 00:54:22 +0000 (0:00:00.556) 0:08:13.498 ********** 2026-03-02 00:56:38.191948 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.191951 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.191955 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.191959 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.191962 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.191969 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.191972 | orchestrator | 2026-03-02 00:56:38.191976 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-02 00:56:38.191980 | orchestrator | Monday 02 March 2026 00:54:24 +0000 (0:00:01.296) 0:08:14.794 ********** 2026-03-02 00:56:38.191984 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-03-02 00:56:38.191987 | orchestrator | 2026-03-02 00:56:38.191991 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-02 00:56:38.191995 | orchestrator | Monday 02 March 2026 00:54:28 +0000 (0:00:04.024) 0:08:18.819 ********** 2026-03-02 00:56:38.191999 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-02 00:56:38.192002 | orchestrator | 2026-03-02 00:56:38.192006 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-02 00:56:38.192010 | orchestrator | Monday 02 March 2026 00:54:30 +0000 (0:00:01.999) 0:08:20.818 ********** 2026-03-02 00:56:38.192014 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.192017 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.192021 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.192025 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.192029 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:56:38.192032 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:56:38.192036 | orchestrator | 2026-03-02 00:56:38.192040 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-02 00:56:38.192043 | orchestrator | Monday 02 March 2026 00:54:31 +0000 (0:00:01.764) 0:08:22.583 ********** 2026-03-02 00:56:38.192047 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.192051 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.192055 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.192058 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:56:38.192062 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:56:38.192066 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:56:38.192069 | orchestrator | 2026-03-02 00:56:38.192073 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-03-02 00:56:38.192077 | orchestrator | Monday 02 March 2026 00:54:32 +0000 (0:00:00.950) 0:08:23.534 ********** 2026-03-02 00:56:38.192081 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:56:38.192085 | orchestrator | 2026-03-02 00:56:38.192089 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-02 00:56:38.192093 | orchestrator | Monday 02 March 2026 00:54:33 +0000 (0:00:01.048) 0:08:24.582 ********** 2026-03-02 00:56:38.192097 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.192100 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.192104 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.192108 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:56:38.192111 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:56:38.192115 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:56:38.192119 | orchestrator | 2026-03-02 00:56:38.192123 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-02 00:56:38.192126 | orchestrator | Monday 02 March 2026 00:54:35 +0000 (0:00:01.685) 0:08:26.267 ********** 2026-03-02 00:56:38.192130 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.192134 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.192138 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.192141 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:56:38.192145 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:56:38.192149 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:56:38.192152 | orchestrator | 2026-03-02 00:56:38.192156 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-02 00:56:38.192160 | orchestrator | Monday 02 March 2026 00:54:38 +0000 (0:00:03.238) 
0:08:29.505 ********** 2026-03-02 00:56:38.192164 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:56:38.192170 | orchestrator | 2026-03-02 00:56:38.192173 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-02 00:56:38.192177 | orchestrator | Monday 02 March 2026 00:54:39 +0000 (0:00:01.202) 0:08:30.708 ********** 2026-03-02 00:56:38.192181 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.192185 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.192188 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.192192 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.192196 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.192199 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.192203 | orchestrator | 2026-03-02 00:56:38.192207 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-02 00:56:38.192212 | orchestrator | Monday 02 March 2026 00:54:40 +0000 (0:00:00.763) 0:08:31.471 ********** 2026-03-02 00:56:38.192216 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.192220 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.192224 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.192228 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:56:38.192231 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:56:38.192237 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:56:38.192241 | orchestrator | 2026-03-02 00:56:38.192245 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-02 00:56:38.192249 | orchestrator | Monday 02 March 2026 00:54:42 +0000 (0:00:02.193) 0:08:33.665 ********** 2026-03-02 00:56:38.192252 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.192256 | 
orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.192260 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.192263 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:56:38.192267 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:56:38.192271 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:56:38.192274 | orchestrator | 2026-03-02 00:56:38.192278 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-02 00:56:38.192282 | orchestrator | 2026-03-02 00:56:38.192286 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-02 00:56:38.192289 | orchestrator | Monday 02 March 2026 00:54:43 +0000 (0:00:01.026) 0:08:34.692 ********** 2026-03-02 00:56:38.192293 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:56:38.192297 | orchestrator | 2026-03-02 00:56:38.192301 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-02 00:56:38.192304 | orchestrator | Monday 02 March 2026 00:54:44 +0000 (0:00:00.447) 0:08:35.139 ********** 2026-03-02 00:56:38.192308 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:56:38.192312 | orchestrator | 2026-03-02 00:56:38.192316 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-02 00:56:38.192319 | orchestrator | Monday 02 March 2026 00:54:45 +0000 (0:00:00.730) 0:08:35.869 ********** 2026-03-02 00:56:38.192323 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.192327 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.192331 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.192334 | orchestrator | 2026-03-02 00:56:38.192338 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2026-03-02 00:56:38.192342 | orchestrator | Monday 02 March 2026 00:54:45 +0000 (0:00:00.281) 0:08:36.151 ********** 2026-03-02 00:56:38.192346 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.192349 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.192353 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.192357 | orchestrator | 2026-03-02 00:56:38.192361 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-02 00:56:38.192364 | orchestrator | Monday 02 March 2026 00:54:46 +0000 (0:00:00.640) 0:08:36.792 ********** 2026-03-02 00:56:38.192371 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.192374 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.192378 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.192382 | orchestrator | 2026-03-02 00:56:38.192386 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-02 00:56:38.192389 | orchestrator | Monday 02 March 2026 00:54:46 +0000 (0:00:00.814) 0:08:37.607 ********** 2026-03-02 00:56:38.192393 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.192397 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.192401 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.192404 | orchestrator | 2026-03-02 00:56:38.192408 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-02 00:56:38.192412 | orchestrator | Monday 02 March 2026 00:54:47 +0000 (0:00:00.737) 0:08:38.344 ********** 2026-03-02 00:56:38.192415 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.192419 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.192423 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.192427 | orchestrator | 2026-03-02 00:56:38.192430 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-02 
00:56:38.192434 | orchestrator | Monday 02 March 2026 00:54:47 +0000 (0:00:00.285) 0:08:38.630 ********** 2026-03-02 00:56:38.192438 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.192442 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.192446 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.192449 | orchestrator | 2026-03-02 00:56:38.192453 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-02 00:56:38.192457 | orchestrator | Monday 02 March 2026 00:54:48 +0000 (0:00:00.262) 0:08:38.893 ********** 2026-03-02 00:56:38.192460 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.192464 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.192468 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.192472 | orchestrator | 2026-03-02 00:56:38.192475 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-02 00:56:38.192479 | orchestrator | Monday 02 March 2026 00:54:48 +0000 (0:00:00.452) 0:08:39.345 ********** 2026-03-02 00:56:38.192483 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.192487 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.192490 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.192494 | orchestrator | 2026-03-02 00:56:38.192498 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-02 00:56:38.192502 | orchestrator | Monday 02 March 2026 00:54:49 +0000 (0:00:00.689) 0:08:40.034 ********** 2026-03-02 00:56:38.192505 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.192509 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.192513 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.192516 | orchestrator | 2026-03-02 00:56:38.192520 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-02 00:56:38.192524 | orchestrator | Monday 
02 March 2026 00:54:49 +0000 (0:00:00.664) 0:08:40.699 ********** 2026-03-02 00:56:38.192528 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.192531 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.192535 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.192539 | orchestrator | 2026-03-02 00:56:38.192543 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-02 00:56:38.192549 | orchestrator | Monday 02 March 2026 00:54:50 +0000 (0:00:00.273) 0:08:40.972 ********** 2026-03-02 00:56:38.192553 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.192557 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.192560 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.192564 | orchestrator | 2026-03-02 00:56:38.192568 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-02 00:56:38.192575 | orchestrator | Monday 02 March 2026 00:54:50 +0000 (0:00:00.521) 0:08:41.494 ********** 2026-03-02 00:56:38.192579 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.192585 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.192588 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.192592 | orchestrator | 2026-03-02 00:56:38.192596 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-02 00:56:38.192600 | orchestrator | Monday 02 March 2026 00:54:50 +0000 (0:00:00.275) 0:08:41.770 ********** 2026-03-02 00:56:38.192603 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.192607 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.192611 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.192614 | orchestrator | 2026-03-02 00:56:38.192618 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-02 00:56:38.192622 | orchestrator | Monday 02 March 2026 00:54:51 +0000 
(0:00:00.309) 0:08:42.080 ********** 2026-03-02 00:56:38.192626 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.192629 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.192633 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.192637 | orchestrator | 2026-03-02 00:56:38.192640 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-02 00:56:38.192644 | orchestrator | Monday 02 March 2026 00:54:51 +0000 (0:00:00.287) 0:08:42.368 ********** 2026-03-02 00:56:38.192648 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.192652 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.192655 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.192659 | orchestrator | 2026-03-02 00:56:38.192663 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-02 00:56:38.192667 | orchestrator | Monday 02 March 2026 00:54:52 +0000 (0:00:00.463) 0:08:42.831 ********** 2026-03-02 00:56:38.192670 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.192674 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.192678 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.192681 | orchestrator | 2026-03-02 00:56:38.192685 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-02 00:56:38.192689 | orchestrator | Monday 02 March 2026 00:54:52 +0000 (0:00:00.262) 0:08:43.094 ********** 2026-03-02 00:56:38.192693 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.192696 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.192700 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.192704 | orchestrator | 2026-03-02 00:56:38.192707 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-02 00:56:38.192711 | orchestrator | Monday 02 March 2026 00:54:52 +0000 (0:00:00.288) 
0:08:43.382 ********** 2026-03-02 00:56:38.192715 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.192719 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.192722 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.192726 | orchestrator | 2026-03-02 00:56:38.192730 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-02 00:56:38.192733 | orchestrator | Monday 02 March 2026 00:54:52 +0000 (0:00:00.281) 0:08:43.663 ********** 2026-03-02 00:56:38.192737 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.192741 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.192745 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.192748 | orchestrator | 2026-03-02 00:56:38.192759 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-02 00:56:38.192763 | orchestrator | Monday 02 March 2026 00:54:53 +0000 (0:00:00.652) 0:08:44.316 ********** 2026-03-02 00:56:38.192767 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.192771 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.192774 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-03-02 00:56:38.192778 | orchestrator | 2026-03-02 00:56:38.192782 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-03-02 00:56:38.192786 | orchestrator | Monday 02 March 2026 00:54:53 +0000 (0:00:00.368) 0:08:44.684 ********** 2026-03-02 00:56:38.192789 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-02 00:56:38.192793 | orchestrator | 2026-03-02 00:56:38.192799 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-03-02 00:56:38.192803 | orchestrator | Monday 02 March 2026 00:54:56 +0000 (0:00:02.571) 0:08:47.256 ********** 2026-03-02 00:56:38.192808 | orchestrator | skipping: [testbed-node-3] 
=> (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-03-02 00:56:38.192813 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.192817 | orchestrator | 2026-03-02 00:56:38.192821 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-03-02 00:56:38.192824 | orchestrator | Monday 02 March 2026 00:54:56 +0000 (0:00:00.448) 0:08:47.704 ********** 2026-03-02 00:56:38.192829 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-02 00:56:38.192837 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-02 00:56:38.192840 | orchestrator | 2026-03-02 00:56:38.192846 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-03-02 00:56:38.192850 | orchestrator | Monday 02 March 2026 00:55:04 +0000 (0:00:07.803) 0:08:55.508 ********** 2026-03-02 00:56:38.192854 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-02 00:56:38.192858 | orchestrator | 2026-03-02 00:56:38.192861 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-02 00:56:38.192867 | orchestrator | Monday 02 March 2026 00:55:08 +0000 (0:00:03.381) 0:08:58.889 ********** 2026-03-02 00:56:38.192871 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-02 00:56:38.192875 | orchestrator | 2026-03-02 00:56:38.192878 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-02 00:56:38.192882 | orchestrator | Monday 02 March 2026 00:55:08 +0000 (0:00:00.568) 0:08:59.458 ********** 2026-03-02 00:56:38.192886 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-02 00:56:38.192890 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-02 00:56:38.192893 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-02 00:56:38.192897 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-02 00:56:38.192901 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-02 00:56:38.192904 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-03-02 00:56:38.192908 | orchestrator | 2026-03-02 00:56:38.192912 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-02 00:56:38.192916 | orchestrator | Monday 02 March 2026 00:55:09 +0000 (0:00:00.948) 0:09:00.406 ********** 2026-03-02 00:56:38.192919 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:56:38.192923 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-02 00:56:38.192927 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-02 00:56:38.192931 | orchestrator | 2026-03-02 00:56:38.192934 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-02 00:56:38.192938 | orchestrator | Monday 02 March 2026 00:55:11 +0000 (0:00:02.355) 0:09:02.761 ********** 2026-03-02 00:56:38.192942 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-02 00:56:38.192945 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-03-02 00:56:38.192949 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.192953 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-02 00:56:38.192959 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-02 00:56:38.192963 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.192967 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-02 00:56:38.192971 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-02 00:56:38.192974 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.192978 | orchestrator | 2026-03-02 00:56:38.192982 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-02 00:56:38.192985 | orchestrator | Monday 02 March 2026 00:55:13 +0000 (0:00:01.990) 0:09:04.751 ********** 2026-03-02 00:56:38.192989 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.192993 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.192997 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.193000 | orchestrator | 2026-03-02 00:56:38.193004 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-02 00:56:38.193008 | orchestrator | Monday 02 March 2026 00:55:16 +0000 (0:00:02.406) 0:09:07.158 ********** 2026-03-02 00:56:38.193012 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.193015 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.193019 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.193023 | orchestrator | 2026-03-02 00:56:38.193026 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-02 00:56:38.193030 | orchestrator | Monday 02 March 2026 00:55:16 +0000 (0:00:00.303) 0:09:07.462 ********** 2026-03-02 00:56:38.193034 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-02 00:56:38.193037 | orchestrator | 2026-03-02 00:56:38.193041 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-02 00:56:38.193045 | orchestrator | Monday 02 March 2026 00:55:17 +0000 (0:00:00.762) 0:09:08.224 ********** 2026-03-02 00:56:38.193049 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:56:38.193052 | orchestrator | 2026-03-02 00:56:38.193056 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-02 00:56:38.193060 | orchestrator | Monday 02 March 2026 00:55:17 +0000 (0:00:00.538) 0:09:08.763 ********** 2026-03-02 00:56:38.193064 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.193067 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.193071 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.193075 | orchestrator | 2026-03-02 00:56:38.193078 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-02 00:56:38.193082 | orchestrator | Monday 02 March 2026 00:55:19 +0000 (0:00:01.205) 0:09:09.969 ********** 2026-03-02 00:56:38.193086 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.193090 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.193093 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.193097 | orchestrator | 2026-03-02 00:56:38.193101 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-02 00:56:38.193105 | orchestrator | Monday 02 March 2026 00:55:20 +0000 (0:00:01.531) 0:09:11.501 ********** 2026-03-02 00:56:38.193108 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.193112 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.193116 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.193119 | orchestrator | 2026-03-02 
00:56:38.193123 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-02 00:56:38.193129 | orchestrator | Monday 02 March 2026 00:55:22 +0000 (0:00:01.645) 0:09:13.146 ********** 2026-03-02 00:56:38.193133 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.193136 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.193140 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.193144 | orchestrator | 2026-03-02 00:56:38.193148 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-02 00:56:38.193151 | orchestrator | Monday 02 March 2026 00:55:23 +0000 (0:00:01.623) 0:09:14.770 ********** 2026-03-02 00:56:38.193157 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.193161 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.193165 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.193169 | orchestrator | 2026-03-02 00:56:38.193173 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-02 00:56:38.193176 | orchestrator | Monday 02 March 2026 00:55:25 +0000 (0:00:01.172) 0:09:15.942 ********** 2026-03-02 00:56:38.193180 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.193184 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.193188 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.193191 | orchestrator | 2026-03-02 00:56:38.193195 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-02 00:56:38.193199 | orchestrator | Monday 02 March 2026 00:55:25 +0000 (0:00:00.560) 0:09:16.503 ********** 2026-03-02 00:56:38.193203 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:56:38.193206 | orchestrator | 2026-03-02 00:56:38.193210 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-03-02 00:56:38.193214 | orchestrator | Monday 02 March 2026 00:55:26 +0000 (0:00:00.622) 0:09:17.125 ********** 2026-03-02 00:56:38.193218 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.193221 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.193225 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.193229 | orchestrator | 2026-03-02 00:56:38.193232 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-02 00:56:38.193236 | orchestrator | Monday 02 March 2026 00:55:26 +0000 (0:00:00.306) 0:09:17.431 ********** 2026-03-02 00:56:38.193240 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.193244 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.193247 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.193251 | orchestrator | 2026-03-02 00:56:38.193255 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-02 00:56:38.193259 | orchestrator | Monday 02 March 2026 00:55:27 +0000 (0:00:01.210) 0:09:18.642 ********** 2026-03-02 00:56:38.193262 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-02 00:56:38.193266 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-02 00:56:38.193270 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-02 00:56:38.193274 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.193277 | orchestrator | 2026-03-02 00:56:38.193281 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-02 00:56:38.193285 | orchestrator | Monday 02 March 2026 00:55:28 +0000 (0:00:00.753) 0:09:19.395 ********** 2026-03-02 00:56:38.193289 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.193292 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.193296 | orchestrator | ok: [testbed-node-5] 2026-03-02 
00:56:38.193300 | orchestrator | 2026-03-02 00:56:38.193304 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-02 00:56:38.193307 | orchestrator | 2026-03-02 00:56:38.193311 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-02 00:56:38.193315 | orchestrator | Monday 02 March 2026 00:55:29 +0000 (0:00:00.645) 0:09:20.041 ********** 2026-03-02 00:56:38.193318 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:56:38.193322 | orchestrator | 2026-03-02 00:56:38.193326 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-02 00:56:38.193330 | orchestrator | Monday 02 March 2026 00:55:29 +0000 (0:00:00.433) 0:09:20.474 ********** 2026-03-02 00:56:38.193333 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:56:38.193337 | orchestrator | 2026-03-02 00:56:38.193341 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-02 00:56:38.193347 | orchestrator | Monday 02 March 2026 00:55:30 +0000 (0:00:00.570) 0:09:21.044 ********** 2026-03-02 00:56:38.193350 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.193354 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.193358 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.193362 | orchestrator | 2026-03-02 00:56:38.193365 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-02 00:56:38.193369 | orchestrator | Monday 02 March 2026 00:55:30 +0000 (0:00:00.296) 0:09:21.341 ********** 2026-03-02 00:56:38.193373 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.193377 | orchestrator | ok: [testbed-node-4] 2026-03-02 
00:56:38.193380 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.193384 | orchestrator | 2026-03-02 00:56:38.193388 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-02 00:56:38.193392 | orchestrator | Monday 02 March 2026 00:55:31 +0000 (0:00:00.692) 0:09:22.033 ********** 2026-03-02 00:56:38.193395 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.193399 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.193403 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.193406 | orchestrator | 2026-03-02 00:56:38.193410 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-02 00:56:38.193414 | orchestrator | Monday 02 March 2026 00:55:32 +0000 (0:00:00.818) 0:09:22.852 ********** 2026-03-02 00:56:38.193418 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.193422 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.193425 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.193429 | orchestrator | 2026-03-02 00:56:38.193477 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-02 00:56:38.193491 | orchestrator | Monday 02 March 2026 00:55:32 +0000 (0:00:00.633) 0:09:23.485 ********** 2026-03-02 00:56:38.193495 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.193502 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.193506 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.193510 | orchestrator | 2026-03-02 00:56:38.193514 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-02 00:56:38.193517 | orchestrator | Monday 02 March 2026 00:55:32 +0000 (0:00:00.271) 0:09:23.757 ********** 2026-03-02 00:56:38.193521 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.193527 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.193530 | orchestrator | skipping: 
[testbed-node-5] 2026-03-02 00:56:38.193534 | orchestrator | 2026-03-02 00:56:38.193538 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-02 00:56:38.193542 | orchestrator | Monday 02 March 2026 00:55:33 +0000 (0:00:00.336) 0:09:24.093 ********** 2026-03-02 00:56:38.193545 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.193549 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.193553 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.193556 | orchestrator | 2026-03-02 00:56:38.193560 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-02 00:56:38.193564 | orchestrator | Monday 02 March 2026 00:55:33 +0000 (0:00:00.576) 0:09:24.670 ********** 2026-03-02 00:56:38.193568 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.193571 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.193575 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.193579 | orchestrator | 2026-03-02 00:56:38.193582 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-02 00:56:38.193586 | orchestrator | Monday 02 March 2026 00:55:34 +0000 (0:00:00.691) 0:09:25.361 ********** 2026-03-02 00:56:38.193590 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.193594 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.193597 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.193601 | orchestrator | 2026-03-02 00:56:38.193605 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-02 00:56:38.193608 | orchestrator | Monday 02 March 2026 00:55:35 +0000 (0:00:00.687) 0:09:26.049 ********** 2026-03-02 00:56:38.193615 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.193619 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.193622 | orchestrator | skipping: [testbed-node-5] 2026-03-02 
00:56:38.193626 | orchestrator | 2026-03-02 00:56:38.193630 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-02 00:56:38.193633 | orchestrator | Monday 02 March 2026 00:55:35 +0000 (0:00:00.321) 0:09:26.371 ********** 2026-03-02 00:56:38.193637 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.193641 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.193644 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.193648 | orchestrator | 2026-03-02 00:56:38.193652 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-02 00:56:38.193656 | orchestrator | Monday 02 March 2026 00:55:36 +0000 (0:00:00.522) 0:09:26.894 ********** 2026-03-02 00:56:38.193659 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.193663 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.193667 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.193670 | orchestrator | 2026-03-02 00:56:38.193674 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-02 00:56:38.193678 | orchestrator | Monday 02 March 2026 00:55:36 +0000 (0:00:00.315) 0:09:27.209 ********** 2026-03-02 00:56:38.193682 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.193685 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.193689 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.193693 | orchestrator | 2026-03-02 00:56:38.193696 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-02 00:56:38.193700 | orchestrator | Monday 02 March 2026 00:55:36 +0000 (0:00:00.298) 0:09:27.508 ********** 2026-03-02 00:56:38.193704 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.193708 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.193711 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.193715 | orchestrator | 2026-03-02 
00:56:38.193719 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-02 00:56:38.193722 | orchestrator | Monday 02 March 2026 00:55:37 +0000 (0:00:00.307) 0:09:27.815 ********** 2026-03-02 00:56:38.193726 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.193730 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.193733 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.193737 | orchestrator | 2026-03-02 00:56:38.193741 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-02 00:56:38.193745 | orchestrator | Monday 02 March 2026 00:55:37 +0000 (0:00:00.435) 0:09:28.251 ********** 2026-03-02 00:56:38.193748 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.193769 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.193773 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.193777 | orchestrator | 2026-03-02 00:56:38.193781 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-02 00:56:38.193785 | orchestrator | Monday 02 March 2026 00:55:37 +0000 (0:00:00.274) 0:09:28.525 ********** 2026-03-02 00:56:38.193788 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.193792 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.193796 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.193800 | orchestrator | 2026-03-02 00:56:38.193803 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-02 00:56:38.193807 | orchestrator | Monday 02 March 2026 00:55:38 +0000 (0:00:00.296) 0:09:28.822 ********** 2026-03-02 00:56:38.193811 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.193815 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.193819 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.193822 | orchestrator | 2026-03-02 00:56:38.193826 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-02 00:56:38.193830 | orchestrator | Monday 02 March 2026 00:55:38 +0000 (0:00:00.301) 0:09:29.123 ********** 2026-03-02 00:56:38.193834 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.193837 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.193844 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.193847 | orchestrator | 2026-03-02 00:56:38.193851 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-02 00:56:38.193855 | orchestrator | Monday 02 March 2026 00:55:38 +0000 (0:00:00.646) 0:09:29.770 ********** 2026-03-02 00:56:38.193861 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:56:38.193865 | orchestrator | 2026-03-02 00:56:38.193869 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-02 00:56:38.193872 | orchestrator | Monday 02 March 2026 00:55:39 +0000 (0:00:00.473) 0:09:30.244 ********** 2026-03-02 00:56:38.193876 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:56:38.193882 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-02 00:56:38.193886 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-02 00:56:38.193890 | orchestrator | 2026-03-02 00:56:38.193893 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-02 00:56:38.193897 | orchestrator | Monday 02 March 2026 00:55:41 +0000 (0:00:01.958) 0:09:32.202 ********** 2026-03-02 00:56:38.193901 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-02 00:56:38.193905 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-02 00:56:38.193909 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.193912 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-03-02 00:56:38.193916 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-02 00:56:38.193920 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.193924 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-02 00:56:38.193927 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-02 00:56:38.193931 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.193935 | orchestrator | 2026-03-02 00:56:38.193939 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-02 00:56:38.193943 | orchestrator | Monday 02 March 2026 00:55:42 +0000 (0:00:01.260) 0:09:33.463 ********** 2026-03-02 00:56:38.193947 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.193950 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.193954 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.193958 | orchestrator | 2026-03-02 00:56:38.193962 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-02 00:56:38.193965 | orchestrator | Monday 02 March 2026 00:55:42 +0000 (0:00:00.306) 0:09:33.770 ********** 2026-03-02 00:56:38.193969 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:56:38.193973 | orchestrator | 2026-03-02 00:56:38.193977 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-02 00:56:38.193981 | orchestrator | Monday 02 March 2026 00:55:43 +0000 (0:00:00.503) 0:09:34.273 ********** 2026-03-02 00:56:38.193984 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-02 00:56:38.193988 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-02 00:56:38.193992 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-02 00:56:38.193996 | orchestrator | 2026-03-02 00:56:38.194000 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-02 00:56:38.194004 | orchestrator | Monday 02 March 2026 00:55:44 +0000 (0:00:01.004) 0:09:35.278 ********** 2026-03-02 00:56:38.194007 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:56:38.194042 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-02 00:56:38.194050 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:56:38.194054 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-02 00:56:38.194058 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:56:38.194061 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-02 00:56:38.194065 | orchestrator | 2026-03-02 00:56:38.194069 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-02 00:56:38.194073 | orchestrator | Monday 02 March 2026 00:55:48 +0000 (0:00:04.162) 0:09:39.440 ********** 2026-03-02 00:56:38.194076 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:56:38.194080 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-02 00:56:38.194084 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:56:38.194087 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-02 00:56:38.194091 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:56:38.194095 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-02 00:56:38.194098 | orchestrator | 2026-03-02 00:56:38.194102 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-02 00:56:38.194106 | orchestrator | Monday 02 March 2026 00:55:51 +0000 (0:00:02.655) 0:09:42.095 ********** 2026-03-02 00:56:38.194110 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-02 00:56:38.194113 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.194117 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-02 00:56:38.194121 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-02 00:56:38.194125 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.194128 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.194132 | orchestrator | 2026-03-02 00:56:38.194138 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-02 00:56:38.194142 | orchestrator | Monday 02 March 2026 00:55:52 +0000 (0:00:01.197) 0:09:43.293 ********** 2026-03-02 00:56:38.194146 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-02 00:56:38.194150 | orchestrator | 2026-03-02 00:56:38.194155 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-02 00:56:38.194159 | orchestrator | Monday 02 March 2026 00:55:52 +0000 (0:00:00.249) 0:09:43.542 ********** 2026-03-02 00:56:38.194163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-02 00:56:38.194167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-02 00:56:38.194171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-02 00:56:38.194175 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-02 00:56:38.194179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-02 00:56:38.194182 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.194186 | orchestrator | 2026-03-02 00:56:38.194190 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-02 00:56:38.194194 | orchestrator | Monday 02 March 2026 00:55:53 +0000 (0:00:00.860) 0:09:44.403 ********** 2026-03-02 00:56:38.194197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-02 00:56:38.194204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-02 00:56:38.194207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-02 00:56:38.194211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-02 00:56:38.194215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-02 00:56:38.194219 | orchestrator | skipping: [testbed-node-3] 2026-03-02 
00:56:38.194222 | orchestrator | 2026-03-02 00:56:38.194226 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-02 00:56:38.194230 | orchestrator | Monday 02 March 2026 00:55:54 +0000 (0:00:00.519) 0:09:44.923 ********** 2026-03-02 00:56:38.194234 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-02 00:56:38.194237 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-02 00:56:38.194241 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-02 00:56:38.194245 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-02 00:56:38.194249 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-02 00:56:38.194255 | orchestrator | 2026-03-02 00:56:38.194261 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-02 00:56:38.194267 | orchestrator | Monday 02 March 2026 00:56:24 +0000 (0:00:30.330) 0:10:15.254 ********** 2026-03-02 00:56:38.194273 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.194279 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.194285 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.194290 | orchestrator | 2026-03-02 00:56:38.194296 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-02 00:56:38.194303 | orchestrator | 
Monday 02 March 2026 00:56:24 +0000 (0:00:00.332) 0:10:15.586 ********** 2026-03-02 00:56:38.194308 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.194314 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.194321 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.194327 | orchestrator | 2026-03-02 00:56:38.194333 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-02 00:56:38.194338 | orchestrator | Monday 02 March 2026 00:56:25 +0000 (0:00:00.332) 0:10:15.918 ********** 2026-03-02 00:56:38.194344 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:56:38.194350 | orchestrator | 2026-03-02 00:56:38.194355 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-02 00:56:38.194361 | orchestrator | Monday 02 March 2026 00:56:25 +0000 (0:00:00.802) 0:10:16.721 ********** 2026-03-02 00:56:38.194370 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:56:38.194376 | orchestrator | 2026-03-02 00:56:38.194382 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-02 00:56:38.194388 | orchestrator | Monday 02 March 2026 00:56:26 +0000 (0:00:00.530) 0:10:17.251 ********** 2026-03-02 00:56:38.194392 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.194395 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.194405 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.194409 | orchestrator | 2026-03-02 00:56:38.194413 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-02 00:56:38.194416 | orchestrator | Monday 02 March 2026 00:56:27 +0000 (0:00:01.350) 0:10:18.602 ********** 2026-03-02 00:56:38.194420 | orchestrator | changed: 
[testbed-node-3] 2026-03-02 00:56:38.194424 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.194427 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.194431 | orchestrator | 2026-03-02 00:56:38.194435 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-02 00:56:38.194438 | orchestrator | Monday 02 March 2026 00:56:29 +0000 (0:00:01.629) 0:10:20.232 ********** 2026-03-02 00:56:38.194442 | orchestrator | changed: [testbed-node-3] 2026-03-02 00:56:38.194446 | orchestrator | changed: [testbed-node-4] 2026-03-02 00:56:38.194449 | orchestrator | changed: [testbed-node-5] 2026-03-02 00:56:38.194453 | orchestrator | 2026-03-02 00:56:38.194457 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-02 00:56:38.194460 | orchestrator | Monday 02 March 2026 00:56:31 +0000 (0:00:02.192) 0:10:22.424 ********** 2026-03-02 00:56:38.194464 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-02 00:56:38.194468 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-02 00:56:38.194472 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-02 00:56:38.194475 | orchestrator | 2026-03-02 00:56:38.194479 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-02 00:56:38.194483 | orchestrator | Monday 02 March 2026 00:56:34 +0000 (0:00:02.769) 0:10:25.193 ********** 2026-03-02 00:56:38.194486 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.194490 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.194494 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.194497 | orchestrator 
| 2026-03-02 00:56:38.194501 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-02 00:56:38.194505 | orchestrator | Monday 02 March 2026 00:56:34 +0000 (0:00:00.309) 0:10:25.503 ********** 2026-03-02 00:56:38.194508 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:56:38.194512 | orchestrator | 2026-03-02 00:56:38.194516 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-02 00:56:38.194519 | orchestrator | Monday 02 March 2026 00:56:35 +0000 (0:00:00.507) 0:10:26.010 ********** 2026-03-02 00:56:38.194523 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.194527 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.194530 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.194534 | orchestrator | 2026-03-02 00:56:38.194538 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-02 00:56:38.194541 | orchestrator | Monday 02 March 2026 00:56:35 +0000 (0:00:00.427) 0:10:26.438 ********** 2026-03-02 00:56:38.194545 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:56:38.194549 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:56:38.194552 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:56:38.194556 | orchestrator | 2026-03-02 00:56:38.194560 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-02 00:56:38.194563 | orchestrator | Monday 02 March 2026 00:56:35 +0000 (0:00:00.303) 0:10:26.741 ********** 2026-03-02 00:56:38.194567 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-02 00:56:38.194571 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-02 00:56:38.194575 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-02 00:56:38.194578 | orchestrator 
| skipping: [testbed-node-3] 2026-03-02 00:56:38.194582 | orchestrator | 2026-03-02 00:56:38.194588 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-02 00:56:38.194592 | orchestrator | Monday 02 March 2026 00:56:36 +0000 (0:00:00.533) 0:10:27.275 ********** 2026-03-02 00:56:38.194595 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:56:38.194599 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:56:38.194603 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:56:38.194606 | orchestrator | 2026-03-02 00:56:38.194610 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:56:38.194614 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-02 00:56:38.194618 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-02 00:56:38.194621 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-02 00:56:38.194625 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-02 00:56:38.194629 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-02 00:56:38.194635 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-02 00:56:38.194638 | orchestrator | 2026-03-02 00:56:38.194642 | orchestrator | 2026-03-02 00:56:38.194646 | orchestrator | 2026-03-02 00:56:38.194650 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 00:56:38.194655 | orchestrator | Monday 02 March 2026 00:56:36 +0000 (0:00:00.211) 0:10:27.486 ********** 2026-03-02 00:56:38.194659 | orchestrator | =============================================================================== 
2026-03-02 00:56:38.194663 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 40.82s 2026-03-02 00:56:38.194666 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.17s 2026-03-02 00:56:38.194670 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.31s 2026-03-02 00:56:38.194674 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.33s 2026-03-02 00:56:38.194677 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.07s 2026-03-02 00:56:38.194681 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.81s 2026-03-02 00:56:38.194685 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.17s 2026-03-02 00:56:38.194688 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.49s 2026-03-02 00:56:38.194692 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 11.08s 2026-03-02 00:56:38.194696 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.80s 2026-03-02 00:56:38.194699 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.69s 2026-03-02 00:56:38.194703 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.55s 2026-03-02 00:56:38.194707 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.26s 2026-03-02 00:56:38.194710 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.16s 2026-03-02 00:56:38.194714 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.02s 2026-03-02 00:56:38.194718 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.83s 2026-03-02 
00:56:38.194721 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.49s 2026-03-02 00:56:38.194725 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.38s 2026-03-02 00:56:38.194729 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.24s 2026-03-02 00:56:38.194735 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.06s 2026-03-02 00:56:38.194739 | orchestrator | 2026-03-02 00:56:38 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:56:38.194743 | orchestrator | 2026-03-02 00:56:38 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:56:38.194746 | orchestrator | 2026-03-02 00:56:38 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:56:41.239230 | orchestrator | 2026-03-02 00:56:41 | INFO  | Task fbfb5221-51e7-4938-a420-7426216897f5 is in state STARTED 2026-03-02 00:56:41.241963 | orchestrator | 2026-03-02 00:56:41 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:56:41.244375 | orchestrator | 2026-03-02 00:56:41 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:56:41.244433 | orchestrator | 2026-03-02 00:56:41 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:56:44.288635 | orchestrator | 2026-03-02 00:56:44 | INFO  | Task fbfb5221-51e7-4938-a420-7426216897f5 is in state STARTED 2026-03-02 00:56:44.290409 | orchestrator | 2026-03-02 00:56:44 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:56:44.293023 | orchestrator | 2026-03-02 00:56:44 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:56:44.293081 | orchestrator | 2026-03-02 00:56:44 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:56:47.349507 | orchestrator | 2026-03-02 00:56:47 | INFO  
| Task fbfb5221-51e7-4938-a420-7426216897f5 is in state STARTED 2026-03-02 00:56:47.351528 | orchestrator | 2026-03-02 00:56:47 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:56:47.352479 | orchestrator | 2026-03-02 00:56:47 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:56:47.352532 | orchestrator | 2026-03-02 00:56:47 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:56:50.410894 | orchestrator | 2026-03-02 00:56:50 | INFO  | Task fbfb5221-51e7-4938-a420-7426216897f5 is in state STARTED 2026-03-02 00:56:50.412472 | orchestrator | 2026-03-02 00:56:50 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:56:50.414215 | orchestrator | 2026-03-02 00:56:50 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:56:50.414337 | orchestrator | 2026-03-02 00:56:50 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:56:53.479087 | orchestrator | 2026-03-02 00:56:53 | INFO  | Task fbfb5221-51e7-4938-a420-7426216897f5 is in state STARTED 2026-03-02 00:56:53.479226 | orchestrator | 2026-03-02 00:56:53 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:56:53.480076 | orchestrator | 2026-03-02 00:56:53 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:56:53.480111 | orchestrator | 2026-03-02 00:56:53 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:56:56.531328 | orchestrator | 2026-03-02 00:56:56 | INFO  | Task fbfb5221-51e7-4938-a420-7426216897f5 is in state STARTED 2026-03-02 00:56:56.532419 | orchestrator | 2026-03-02 00:56:56 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:56:56.535925 | orchestrator | 2026-03-02 00:56:56 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:56:56.535982 | orchestrator | 2026-03-02 00:56:56 | INFO  | Wait 1 second(s) until the 
next check 2026-03-02 00:56:59.596091 | orchestrator | 2026-03-02 00:56:59 | INFO  | Task fbfb5221-51e7-4938-a420-7426216897f5 is in state STARTED 2026-03-02 00:56:59.596999 | orchestrator | 2026-03-02 00:56:59 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:56:59.599362 | orchestrator | 2026-03-02 00:56:59 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:56:59.599407 | orchestrator | 2026-03-02 00:56:59 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:57:02.645615 | orchestrator | 2026-03-02 00:57:02 | INFO  | Task fbfb5221-51e7-4938-a420-7426216897f5 is in state STARTED 2026-03-02 00:57:02.649536 | orchestrator | 2026-03-02 00:57:02 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state STARTED 2026-03-02 00:57:02.651341 | orchestrator | 2026-03-02 00:57:02 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:57:02.651392 | orchestrator | 2026-03-02 00:57:02 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:57:05.691597 | orchestrator | 2026-03-02 00:57:05 | INFO  | Task fbfb5221-51e7-4938-a420-7426216897f5 is in state STARTED 2026-03-02 00:57:05.695827 | orchestrator | 2026-03-02 00:57:05 | INFO  | Task 927c50ad-4b68-44de-9155-250b3b104553 is in state SUCCESS 2026-03-02 00:57:05.696517 | orchestrator | 2026-03-02 00:57:05.696566 | orchestrator | 2026-03-02 00:57:05.696573 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-02 00:57:05.696578 | orchestrator | 2026-03-02 00:57:05.696582 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-02 00:57:05.696590 | orchestrator | Monday 02 March 2026 00:54:36 +0000 (0:00:00.250) 0:00:00.250 ********** 2026-03-02 00:57:05.696594 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:57:05.696599 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:57:05.696604 | orchestrator | ok: 
[testbed-node-2] 2026-03-02 00:57:05.696608 | orchestrator | 2026-03-02 00:57:05.696612 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-02 00:57:05.696617 | orchestrator | Monday 02 March 2026 00:54:37 +0000 (0:00:00.259) 0:00:00.510 ********** 2026-03-02 00:57:05.696621 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-02 00:57:05.696626 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-02 00:57:05.696630 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-02 00:57:05.696634 | orchestrator | 2026-03-02 00:57:05.696638 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-02 00:57:05.696642 | orchestrator | 2026-03-02 00:57:05.696646 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-02 00:57:05.696650 | orchestrator | Monday 02 March 2026 00:54:37 +0000 (0:00:00.408) 0:00:00.918 ********** 2026-03-02 00:57:05.696654 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:57:05.696748 | orchestrator | 2026-03-02 00:57:05.696759 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-02 00:57:05.696763 | orchestrator | Monday 02 March 2026 00:54:37 +0000 (0:00:00.479) 0:00:01.398 ********** 2026-03-02 00:57:05.696767 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-02 00:57:05.696772 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-02 00:57:05.696785 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-02 00:57:05.696789 | orchestrator | 2026-03-02 00:57:05.696793 | orchestrator | TASK [opensearch : Ensuring config directories 
exist] **************************
2026-03-02 00:57:05.696796 | orchestrator | Monday 02 March 2026 00:54:39 +0000 (0:00:01.678)       0:00:03.077 **********
2026-03-02 00:57:05.696815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-02 00:57:05.696842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-02 00:57:05.696858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-02 00:57:05.696868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-02 00:57:05.696877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-02 00:57:05.696894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-02 00:57:05.696902 | orchestrator |
2026-03-02 00:57:05.696908 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-02 00:57:05.696915 | orchestrator | Monday 02 March 2026 00:54:41 +0000 (0:00:02.084)       0:00:05.161 **********
2026-03-02 00:57:05.696920 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:57:05.696924 | orchestrator |
2026-03-02 00:57:05.696928 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-03-02 00:57:05.696932 | orchestrator | Monday 02 March 2026 00:54:42 +0000 (0:00:00.531)       0:00:05.693 **********
2026-03-02 00:57:05.696943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-02 00:57:05.696948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-02 00:57:05.696952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-02 00:57:05.696964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-02 00:57:05.696972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-02 00:57:05.696977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-02 00:57:05.696982 | orchestrator |
2026-03-02 00:57:05.696986 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-03-02 00:57:05.696993 | orchestrator | Monday 02 March 2026 00:54:45 +0000 (0:00:02.890)       0:00:08.583 **********
2026-03-02 00:57:05.696997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-02 00:57:05.697005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-02 00:57:05.697011 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:57:05.697018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-02 00:57:05.697028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-02 00:57:05.697039 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:57:05.697046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-02 00:57:05.697055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-02 00:57:05.697062 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:57:05.697069 | orchestrator |
2026-03-02 00:57:05.697075 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-03-02 00:57:05.697082 | orchestrator | Monday 02 March 2026 00:54:46 +0000 (0:00:00.949)       0:00:09.533 **********
2026-03-02 00:57:05.697088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-02 00:57:05.697100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-02 00:57:05.697110 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:57:05.697114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-02 00:57:05.697121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-02 00:57:05.697126 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:57:05.697130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-02 00:57:05.697138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-02 00:57:05.697146 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:57:05.697149 | orchestrator |
2026-03-02 00:57:05.697154 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-03-02 00:57:05.697157 | orchestrator | Monday 02 March 2026 00:54:46 +0000 (0:00:00.880)       0:00:10.414 **********
2026-03-02 00:57:05.697161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-02 00:57:05.697169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-02 00:57:05.697173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-02 00:57:05.697181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-02 00:57:05.697192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-02 00:57:05.697200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-02 00:57:05.697204 | orchestrator |
2026-03-02 00:57:05.697208 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-03-02 00:57:05.697212 | orchestrator | Monday 02 March 2026 00:54:49 +0000 (0:00:02.633)       0:00:13.048 **********
2026-03-02 00:57:05.697216 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:57:05.697221 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:57:05.697224 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:57:05.697228 | orchestrator |
2026-03-02 00:57:05.697232 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2026-03-02 00:57:05.697236 | orchestrator | Monday 02 March 2026 00:54:51 +0000 (0:00:02.417)       0:00:15.465 **********
2026-03-02 00:57:05.697240 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:57:05.697244 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:57:05.697247 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:57:05.697251 | orchestrator |
2026-03-02 00:57:05.697255 | orchestrator | TASK [opensearch : Check opensearch containers] ********************************
2026-03-02 00:57:05.697259 | orchestrator | Monday 02 March 2026 00:54:54 +0000 (0:00:02.619)       0:00:18.084 **********
2026-03-02 00:57:05.697263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-02 00:57:05.697274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-02 00:57:05.697278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-02 00:57:05.697285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-02 00:57:05.697290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-02 00:57:05.697298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-02 00:57:05.697305 | orchestrator |
2026-03-02 00:57:05.697309 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-02 00:57:05.697313 | orchestrator | Monday 02 March 2026 00:54:56 +0000 (0:00:01.945)       0:00:20.030 **********
2026-03-02 00:57:05.697317 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:57:05.697320 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:57:05.697324 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:57:05.697328 | orchestrator |
2026-03-02 00:57:05.697332 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-02 00:57:05.697336 | orchestrator | Monday 02 March 2026 00:54:56 +0000 (0:00:00.383)       0:00:20.413 **********
2026-03-02 00:57:05.697340 | orchestrator |
2026-03-02 00:57:05.697343 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-02 00:57:05.697347 | orchestrator | Monday 02 March 2026 00:54:56 +0000 (0:00:00.069)       0:00:20.482 **********
2026-03-02 00:57:05.697351 | orchestrator |
2026-03-02 00:57:05.697355 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-02 00:57:05.697359 | orchestrator | Monday 02 March 2026 00:54:57 +0000 (0:00:00.080)       0:00:20.563 **********
2026-03-02 00:57:05.697362 | orchestrator |
2026-03-02 00:57:05.697366 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-03-02 00:57:05.697370 | orchestrator | Monday 02 March 2026 00:54:57 +0000 (0:00:00.127)       0:00:20.691 **********
2026-03-02 00:57:05.697374 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:57:05.697378 | orchestrator |
2026-03-02 00:57:05.697381 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-03-02 00:57:05.697385 | orchestrator | Monday 02 March 2026 00:54:57 +0000 (0:00:00.627)       0:00:21.318 **********
2026-03-02 00:57:05.697390 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:57:05.697395 | orchestrator |
2026-03-02 00:57:05.697400 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-03-02 00:57:05.697404 | orchestrator | Monday 02 March 2026 00:54:58 +0000 (0:00:00.192)       0:00:21.511 **********
2026-03-02 00:57:05.697409 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:57:05.697414 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:57:05.697418 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:57:05.697423 | orchestrator |
2026-03-02 00:57:05.697428 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-03-02 00:57:05.697436 | orchestrator | Monday 02 March 2026 00:55:45 +0000 (0:00:47.597)       0:01:09.108 **********
2026-03-02 00:57:05.697441 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:57:05.697445 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:57:05.697450 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:57:05.697454 | orchestrator |
2026-03-02 00:57:05.697458 | orchestrator | TASK [opensearch :
include_tasks] ********************************************** 2026-03-02 00:57:05.697463 | orchestrator | Monday 02 March 2026 00:56:49 +0000 (0:01:04.167) 0:02:13.275 ********** 2026-03-02 00:57:05.697468 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:57:05.697476 | orchestrator | 2026-03-02 00:57:05.697480 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-02 00:57:05.697485 | orchestrator | Monday 02 March 2026 00:56:50 +0000 (0:00:00.695) 0:02:13.971 ********** 2026-03-02 00:57:05.697489 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:57:05.697494 | orchestrator | 2026-03-02 00:57:05.697499 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-03-02 00:57:05.697504 | orchestrator | Monday 02 March 2026 00:56:53 +0000 (0:00:02.963) 0:02:16.934 ********** 2026-03-02 00:57:05.697508 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:57:05.697512 | orchestrator | 2026-03-02 00:57:05.697517 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-02 00:57:05.697521 | orchestrator | Monday 02 March 2026 00:56:55 +0000 (0:00:02.351) 0:02:19.286 ********** 2026-03-02 00:57:05.697526 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:57:05.697530 | orchestrator | 2026-03-02 00:57:05.697534 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-02 00:57:05.697649 | orchestrator | Monday 02 March 2026 00:56:58 +0000 (0:00:02.631) 0:02:21.917 ********** 2026-03-02 00:57:05.697657 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:57:05.697664 | orchestrator | 2026-03-02 00:57:05.697671 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-02 00:57:05.697677 | orchestrator | Monday 02 March 2026 00:57:01 +0000 
(0:00:03.229) 0:02:25.146 ********** 2026-03-02 00:57:05.697683 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:57:05.697690 | orchestrator | 2026-03-02 00:57:05.697713 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:57:05.697724 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-02 00:57:05.697732 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-02 00:57:05.697746 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-02 00:57:05.697754 | orchestrator | 2026-03-02 00:57:05.697758 | orchestrator | 2026-03-02 00:57:05.697762 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 00:57:05.697766 | orchestrator | Monday 02 March 2026 00:57:04 +0000 (0:00:02.996) 0:02:28.142 ********** 2026-03-02 00:57:05.697770 | orchestrator | =============================================================================== 2026-03-02 00:57:05.697774 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 64.17s 2026-03-02 00:57:05.697778 | orchestrator | opensearch : Restart opensearch container ------------------------------ 47.60s 2026-03-02 00:57:05.697782 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.23s 2026-03-02 00:57:05.697786 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.00s 2026-03-02 00:57:05.697790 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.96s 2026-03-02 00:57:05.697794 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.89s 2026-03-02 00:57:05.697798 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.63s 
2026-03-02 00:57:05.697802 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.63s 2026-03-02 00:57:05.697806 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.62s 2026-03-02 00:57:05.697810 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.42s 2026-03-02 00:57:05.697816 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.35s 2026-03-02 00:57:05.697822 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.08s 2026-03-02 00:57:05.697828 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.95s 2026-03-02 00:57:05.697842 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.68s 2026-03-02 00:57:05.697848 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.95s 2026-03-02 00:57:05.697854 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.88s 2026-03-02 00:57:05.697861 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.70s 2026-03-02 00:57:05.697867 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.63s 2026-03-02 00:57:05.697873 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2026-03-02 00:57:05.697878 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.48s 2026-03-02 00:57:05.697884 | orchestrator | 2026-03-02 00:57:05 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:57:05.697892 | orchestrator | 2026-03-02 00:57:05 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:57:08.741093 | orchestrator | 2026-03-02 00:57:08 | INFO  | Task fbfb5221-51e7-4938-a420-7426216897f5 is in state 
STARTED 2026-03-02 00:57:08.743216 | orchestrator | 2026-03-02 00:57:08 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state STARTED 2026-03-02 00:57:08.744013 | orchestrator | 2026-03-02 00:57:08 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:57:36.168402 | orchestrator | 2026-03-02 00:57:36 | INFO  | Task fbfb5221-51e7-4938-a420-7426216897f5 is in state STARTED 2026-03-02 00:57:36.170111 | orchestrator | 2026-03-02 00:57:36 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED 2026-03-02 00:57:36.173805 | orchestrator | 2026-03-02 00:57:36 | INFO  | Task 79e9ca49-d118-4996-9c4b-b3eb56d1edf3 is in state SUCCESS 2026-03-02 00:57:36.175893 | orchestrator | 2026-03-02 00:57:36.175934 | orchestrator | 2026-03-02 00:57:36.175940 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-03-02 00:57:36.175946 | orchestrator | 
2026-03-02 00:57:36.175950 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-02 00:57:36.175954 | orchestrator | Monday 02 March 2026 00:54:36 +0000 (0:00:00.080) 0:00:00.080 ********** 2026-03-02 00:57:36.175959 | orchestrator | ok: [localhost] => { 2026-03-02 00:57:36.175964 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-03-02 00:57:36.175969 | orchestrator | } 2026-03-02 00:57:36.175973 | orchestrator | 2026-03-02 00:57:36.175977 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-03-02 00:57:36.175981 | orchestrator | Monday 02 March 2026 00:54:36 +0000 (0:00:00.055) 0:00:00.135 ********** 2026-03-02 00:57:36.175985 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-03-02 00:57:36.175990 | orchestrator | ...ignoring 2026-03-02 00:57:36.175994 | orchestrator | 2026-03-02 00:57:36.176017 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-03-02 00:57:36.176021 | orchestrator | Monday 02 March 2026 00:54:39 +0000 (0:00:02.887) 0:00:03.023 ********** 2026-03-02 00:57:36.176025 | orchestrator | skipping: [localhost] 2026-03-02 00:57:36.176029 | orchestrator | 2026-03-02 00:57:36.176033 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-03-02 00:57:36.176037 | orchestrator | Monday 02 March 2026 00:54:39 +0000 (0:00:00.047) 0:00:03.071 ********** 2026-03-02 00:57:36.176040 | orchestrator | ok: [localhost] 2026-03-02 00:57:36.176044 | orchestrator | 2026-03-02 00:57:36.176048 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-02 00:57:36.176052 | orchestrator | 2026-03-02 00:57:36.176056 | orchestrator | TASK 
[Group hosts based on Kolla action] *************************************** 2026-03-02 00:57:36.176060 | orchestrator | Monday 02 March 2026 00:54:39 +0000 (0:00:00.148) 0:00:03.220 ********** 2026-03-02 00:57:36.176064 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:57:36.176068 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:57:36.176071 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:57:36.176075 | orchestrator | 2026-03-02 00:57:36.176079 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-02 00:57:36.176083 | orchestrator | Monday 02 March 2026 00:54:40 +0000 (0:00:00.376) 0:00:03.596 ********** 2026-03-02 00:57:36.176086 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-02 00:57:36.176091 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-02 00:57:36.176095 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-02 00:57:36.176099 | orchestrator | 2026-03-02 00:57:36.176117 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-02 00:57:36.176122 | orchestrator | 2026-03-02 00:57:36.176125 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-02 00:57:36.176129 | orchestrator | Monday 02 March 2026 00:54:40 +0000 (0:00:00.659) 0:00:04.255 ********** 2026-03-02 00:57:36.176133 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-02 00:57:36.176137 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-02 00:57:36.176141 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-02 00:57:36.176144 | orchestrator | 2026-03-02 00:57:36.176148 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-02 00:57:36.176152 | orchestrator | Monday 02 March 2026 00:54:40 +0000 (0:00:00.331) 0:00:04.587 ********** 2026-03-02 00:57:36.176156 | 
orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:57:36.176161 | orchestrator | 2026-03-02 00:57:36.176165 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-02 00:57:36.176168 | orchestrator | Monday 02 March 2026 00:54:41 +0000 (0:00:00.486) 0:00:05.074 ********** 2026-03-02 00:57:36.176185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-02 00:57:36.176318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-02 00:57:36.176383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-02 00:57:36.176390 | 
orchestrator | 2026-03-02 00:57:36.176398 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-02 00:57:36.176403 | orchestrator | Monday 02 March 2026 00:54:44 +0000 (0:00:02.805) 0:00:07.879 ********** 2026-03-02 00:57:36.176406 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:57:36.176411 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:57:36.176415 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:57:36.176418 | orchestrator | 2026-03-02 00:57:36.176422 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-02 00:57:36.176426 | orchestrator | Monday 02 March 2026 00:54:44 +0000 (0:00:00.616) 0:00:08.495 ********** 2026-03-02 00:57:36.176430 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:57:36.176434 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:57:36.176437 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:57:36.176441 | orchestrator | 2026-03-02 00:57:36.176445 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-02 00:57:36.176449 | orchestrator | Monday 02 March 2026 00:54:46 +0000 (0:00:01.511) 0:00:10.007 ********** 2026-03-02 00:57:36.176456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-02 00:57:36.176469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-02 00:57:36.176480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-02 00:57:36.176491 | orchestrator | 2026-03-02 00:57:36.176497 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-02 00:57:36.176503 | orchestrator | Monday 02 March 2026 00:54:49 +0000 (0:00:03.545) 0:00:13.553 ********** 2026-03-02 00:57:36.176510 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:57:36.176517 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:57:36.176521 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:57:36.176524 | orchestrator | 2026-03-02 00:57:36.176528 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-02 00:57:36.176532 | orchestrator | Monday 02 March 2026 00:54:51 +0000 (0:00:01.161) 0:00:14.715 ********** 2026-03-02 00:57:36.176535 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:57:36.176539 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:57:36.176543 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:57:36.176546 | orchestrator | 2026-03-02 00:57:36.176550 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 
2026-03-02 00:57:36.176554 | orchestrator | Monday 02 March 2026 00:54:55 +0000 (0:00:04.469) 0:00:19.184 **********
2026-03-02 00:57:36.176558 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:57:36.176561 | orchestrator |
2026-03-02 00:57:36.176565 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-02 00:57:36.176569 | orchestrator | Monday 02 March 2026 00:54:56 +0000 (0:00:00.451) 0:00:19.635 **********
2026-03-02 00:57:36.176577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-02 00:57:36.176587 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:57:36.176594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-02 00:57:36.176599 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:57:36.176606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-02 00:57:36.176614 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:57:36.176618 | orchestrator |
2026-03-02 00:57:36.176622 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-03-02 00:57:36.176625 | orchestrator | Monday 02 March 2026 00:54:58 +0000 (0:00:02.181) 0:00:21.816 **********
2026-03-02 00:57:36.176632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp',
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-02 00:57:36.176636 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:57:36.176684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-02 00:57:36.176693 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:57:36.176700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-02 00:57:36.176704 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:57:36.176708 | orchestrator |
2026-03-02 00:57:36.176711 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-03-02 00:57:36.176715 | orchestrator | Monday 02 March 2026 00:55:01 +0000 (0:00:02.995) 0:00:24.812 **********
2026-03-02 00:57:36.176719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup',
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-02 00:57:36.176731 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:57:36.176741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-02 00:57:36.176745 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:57:36.176749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-02 00:57:36.176754 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:57:36.176757 | orchestrator |
2026-03-02 00:57:36.176765 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2026-03-02 00:57:36.176769 | orchestrator | Monday 02 March 2026 00:55:04 +0000 (0:00:03.002) 0:00:27.814 **********
2026-03-02 00:57:36.176781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-02 00:57:36.176786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-02 00:57:36.176796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-02 00:57:36.176804 | orchestrator |
2026-03-02 00:57:36.176808 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-03-02 00:57:36.176812 | orchestrator | Monday 02 March 2026 00:55:07 +0000 (0:00:02.820) 0:00:30.634 **********
2026-03-02 00:57:36.176815 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:57:36.176819 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:57:36.176823 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:57:36.176827 | orchestrator |
2026-03-02 00:57:36.176830 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-03-02 00:57:36.176834 | orchestrator | Monday 02 March 2026 00:55:08 +0000 (0:00:01.061) 0:00:31.696 **********
2026-03-02 00:57:36.176838 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:57:36.176842 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:57:36.176846 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:57:36.176849 | orchestrator |
2026-03-02 00:57:36.176853 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-03-02 00:57:36.176857 | orchestrator | Monday 02 March 2026 00:55:08 +0000 (0:00:00.449) 0:00:32.146 **********
2026-03-02 00:57:36.176861 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:57:36.176864 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:57:36.176868 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:57:36.176872 | orchestrator |
2026-03-02 00:57:36.176875 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-03-02 00:57:36.176879 | orchestrator | Monday 02
March 2026 00:55:09 +0000 (0:00:00.563) 0:00:32.709 **********
2026-03-02 00:57:36.176884 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-03-02 00:57:36.176888 | orchestrator | ...ignoring
2026-03-02 00:57:36.176892 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-03-02 00:57:36.176895 | orchestrator | ...ignoring
2026-03-02 00:57:36.176899 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-03-02 00:57:36.176903 | orchestrator | ...ignoring
2026-03-02 00:57:36.176906 | orchestrator |
2026-03-02 00:57:36.176910 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-03-02 00:57:36.176918 | orchestrator | Monday 02 March 2026 00:55:20 +0000 (0:00:10.962) 0:00:43.671 **********
2026-03-02 00:57:36.176922 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:57:36.176925 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:57:36.176929 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:57:36.176933 | orchestrator |
2026-03-02 00:57:36.176936 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-03-02 00:57:36.176940 | orchestrator | Monday 02 March 2026 00:55:20 +0000 (0:00:00.524) 0:00:44.196 **********
2026-03-02 00:57:36.176944 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:57:36.176948 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:57:36.176951 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:57:36.176955 | orchestrator |
2026-03-02 00:57:36.176959 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-03-02 00:57:36.176963 | orchestrator | Monday 02 March 2026 00:55:21 +0000 (0:00:00.641) 0:00:44.837 **********
2026-03-02 00:57:36.176966 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:57:36.176970 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:57:36.176974 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:57:36.176978 | orchestrator |
2026-03-02 00:57:36.176981 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-03-02 00:57:36.176985 | orchestrator | Monday 02 March 2026 00:55:21 +0000 (0:00:00.447) 0:00:45.285 **********
2026-03-02 00:57:36.176989 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:57:36.176992 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:57:36.176996 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:57:36.177000 | orchestrator |
2026-03-02 00:57:36.177004 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-03-02 00:57:36.177007 | orchestrator | Monday 02 March 2026 00:55:22 +0000 (0:00:00.432) 0:00:45.717 **********
2026-03-02 00:57:36.177011 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:57:36.177015 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:57:36.177018 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:57:36.177022 | orchestrator |
2026-03-02 00:57:36.177026 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-03-02 00:57:36.177031 | orchestrator | Monday 02 March 2026 00:55:22 +0000 (0:00:00.417) 0:00:46.135 **********
2026-03-02 00:57:36.177037 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:57:36.177042 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:57:36.177046 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:57:36.177050 | orchestrator |
2026-03-02 00:57:36.177055 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-02 00:57:36.177059 | orchestrator | Monday 02 March 2026 00:55:23 +0000 (0:00:00.799) 0:00:46.935 **********
2026-03-02 00:57:36.177063 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:57:36.177068 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:57:36.177072 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-03-02 00:57:36.177077 | orchestrator |
2026-03-02 00:57:36.177081 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-03-02 00:57:36.177086 | orchestrator | Monday 02 March 2026 00:55:23 +0000 (0:00:00.375) 0:00:47.310 **********
2026-03-02 00:57:36.177090 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:57:36.177094 | orchestrator |
2026-03-02 00:57:36.177099 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-03-02 00:57:36.177106 | orchestrator | Monday 02 March 2026 00:55:33 +0000 (0:00:09.373) 0:00:56.683 **********
2026-03-02 00:57:36.177110 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:57:36.177114 | orchestrator |
2026-03-02 00:57:36.177119 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-02 00:57:36.177123 | orchestrator | Monday 02 March 2026 00:55:33 +0000 (0:00:00.134) 0:00:56.818 **********
2026-03-02 00:57:36.177127 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:57:36.177131 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:57:36.177136 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:57:36.177143 | orchestrator |
2026-03-02 00:57:36.177148 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-03-02 00:57:36.177152 | orchestrator | Monday 02 March 2026 00:55:34 +0000 (0:00:00.986) 0:00:57.805 **********
2026-03-02 00:57:36.177156 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:57:36.177161 | orchestrator |
2026-03-02 00:57:36.177165 | orchestrator | RUNNING
HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-03-02 00:57:36.177169 | orchestrator | Monday 02 March 2026 00:55:40 +0000 (0:00:06.779) 0:01:04.585 **********
2026-03-02 00:57:36.177174 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left).
2026-03-02 00:57:36.177178 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:57:36.177183 | orchestrator |
2026-03-02 00:57:36.177187 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-03-02 00:57:36.177191 | orchestrator | Monday 02 March 2026 00:55:48 +0000 (0:00:07.225) 0:01:11.811 **********
2026-03-02 00:57:36.177196 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:57:36.177200 | orchestrator |
2026-03-02 00:57:36.177204 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-03-02 00:57:36.177208 | orchestrator | Monday 02 March 2026 00:55:50 +0000 (0:00:02.638) 0:01:14.449 **********
2026-03-02 00:57:36.177213 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:57:36.177217 | orchestrator |
2026-03-02 00:57:36.177221 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-03-02 00:57:36.177226 | orchestrator | Monday 02 March 2026 00:55:50 +0000 (0:00:00.112) 0:01:14.562 **********
2026-03-02 00:57:36.177230 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:57:36.177234 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:57:36.177239 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:57:36.177243 | orchestrator |
2026-03-02 00:57:36.177247 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-03-02 00:57:36.177251 | orchestrator | Monday 02 March 2026 00:55:51 +0000 (0:00:00.302) 0:01:14.864 **********
2026-03-02 00:57:36.177256 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:57:36.177260 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:57:36.177264 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:57:36.177268 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-02 00:57:36.177273 | orchestrator |
2026-03-02 00:57:36.177277 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-02 00:57:36.177281 | orchestrator | skipping: no hosts matched
2026-03-02 00:57:36.177285 | orchestrator |
2026-03-02 00:57:36.177290 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-02 00:57:36.177294 | orchestrator |
2026-03-02 00:57:36.177298 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-02 00:57:36.177303 | orchestrator | Monday 02 March 2026 00:55:51 +0000 (0:00:00.499) 0:01:15.364 **********
2026-03-02 00:57:36.177307 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:57:36.177312 | orchestrator |
2026-03-02 00:57:36.177316 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-02 00:57:36.177320 | orchestrator | Monday 02 March 2026 00:56:12 +0000 (0:00:20.544) 0:01:35.908 **********
2026-03-02 00:57:36.177324 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:57:36.177329 | orchestrator |
2026-03-02 00:57:36.177333 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-02 00:57:36.177338 | orchestrator | Monday 02 March 2026 00:56:22 +0000 (0:00:10.583) 0:01:46.492 **********
2026-03-02 00:57:36.177342 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:57:36.177347 | orchestrator |
2026-03-02 00:57:36.177351 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-02 00:57:36.177355 | orchestrator |
2026-03-02 00:57:36.177360 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-02 00:57:36.177364 | orchestrator | Monday 02 March 2026 00:56:25 +0000 (0:00:02.479) 0:01:48.971 **********
2026-03-02 00:57:36.177371 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:57:36.177375 | orchestrator |
2026-03-02 00:57:36.177380 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-02 00:57:36.177384 | orchestrator | Monday 02 March 2026 00:56:41 +0000 (0:00:16.108) 0:02:05.080 **********
2026-03-02 00:57:36.177389 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:57:36.177394 | orchestrator |
2026-03-02 00:57:36.177398 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-02 00:57:36.177401 | orchestrator | Monday 02 March 2026 00:56:57 +0000 (0:00:15.609) 0:02:20.689 **********
2026-03-02 00:57:36.177408 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:57:36.177411 | orchestrator |
2026-03-02 00:57:36.177415 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-02 00:57:36.177419 | orchestrator |
2026-03-02 00:57:36.177423 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-02 00:57:36.177426 | orchestrator | Monday 02 March 2026 00:56:59 +0000 (0:00:02.661) 0:02:23.351 **********
2026-03-02 00:57:36.177430 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:57:36.177434 | orchestrator |
2026-03-02 00:57:36.177438 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-02 00:57:36.177441 | orchestrator | Monday 02 March 2026 00:57:16 +0000 (0:00:17.018) 0:02:40.370 **********
2026-03-02 00:57:36.177445 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:57:36.177449 | orchestrator |
2026-03-02 00:57:36.177453 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-02 00:57:36.177456
| orchestrator | Monday 02 March 2026 00:57:17 +0000 (0:00:00.617) 0:02:40.987 ********** 2026-03-02 00:57:36.177460 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:57:36.177464 | orchestrator | 2026-03-02 00:57:36.177468 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-02 00:57:36.177471 | orchestrator | 2026-03-02 00:57:36.177478 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-02 00:57:36.177482 | orchestrator | Monday 02 March 2026 00:57:20 +0000 (0:00:02.688) 0:02:43.676 ********** 2026-03-02 00:57:36.177485 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:57:36.177489 | orchestrator | 2026-03-02 00:57:36.177493 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-02 00:57:36.177497 | orchestrator | Monday 02 March 2026 00:57:20 +0000 (0:00:00.524) 0:02:44.201 ********** 2026-03-02 00:57:36.177500 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:57:36.177504 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:57:36.177508 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:57:36.177511 | orchestrator | 2026-03-02 00:57:36.177515 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-02 00:57:36.177519 | orchestrator | Monday 02 March 2026 00:57:23 +0000 (0:00:02.825) 0:02:47.027 ********** 2026-03-02 00:57:36.177523 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:57:36.177526 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:57:36.177530 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:57:36.177534 | orchestrator | 2026-03-02 00:57:36.177537 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-02 00:57:36.177541 | orchestrator | Monday 02 March 2026 00:57:26 +0000 (0:00:02.701) 0:02:49.728 
********** 2026-03-02 00:57:36.177545 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:57:36.177549 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:57:36.177552 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:57:36.177556 | orchestrator | 2026-03-02 00:57:36.177560 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-02 00:57:36.177563 | orchestrator | Monday 02 March 2026 00:57:28 +0000 (0:00:02.402) 0:02:52.130 ********** 2026-03-02 00:57:36.177567 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:57:36.177571 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:57:36.177575 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:57:36.177578 | orchestrator | 2026-03-02 00:57:36.177585 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-02 00:57:36.177589 | orchestrator | Monday 02 March 2026 00:57:30 +0000 (0:00:02.090) 0:02:54.221 ********** 2026-03-02 00:57:36.177593 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:57:36.177596 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:57:36.177600 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:57:36.177604 | orchestrator | 2026-03-02 00:57:36.177608 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-02 00:57:36.177611 | orchestrator | Monday 02 March 2026 00:57:33 +0000 (0:00:03.036) 0:02:57.258 ********** 2026-03-02 00:57:36.177615 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:57:36.177619 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:57:36.177623 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:57:36.177626 | orchestrator | 2026-03-02 00:57:36.177630 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:57:36.177634 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  
2026-03-02 00:57:36.177638 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-02 00:57:36.177658 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-02 00:57:36.177664 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-02 00:57:36.177670 | orchestrator | 2026-03-02 00:57:36.177676 | orchestrator | 2026-03-02 00:57:36.177681 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 00:57:36.177687 | orchestrator | Monday 02 March 2026 00:57:33 +0000 (0:00:00.221) 0:02:57.479 ********** 2026-03-02 00:57:36.177693 | orchestrator | =============================================================================== 2026-03-02 00:57:36.177707 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 36.65s 2026-03-02 00:57:36.177719 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 26.19s 2026-03-02 00:57:36.177726 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 17.02s 2026-03-02 00:57:36.177731 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.96s 2026-03-02 00:57:36.177735 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.37s 2026-03-02 00:57:36.177742 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.23s 2026-03-02 00:57:36.177746 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 6.78s 2026-03-02 00:57:36.177749 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.14s 2026-03-02 00:57:36.177753 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.47s 2026-03-02 
00:57:36.177757 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.55s 2026-03-02 00:57:36.177761 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.04s 2026-03-02 00:57:36.177764 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.00s 2026-03-02 00:57:36.177768 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.00s 2026-03-02 00:57:36.177772 | orchestrator | Check MariaDB service --------------------------------------------------- 2.89s 2026-03-02 00:57:36.177775 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.83s 2026-03-02 00:57:36.177782 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.82s 2026-03-02 00:57:36.177786 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.81s 2026-03-02 00:57:36.177790 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.70s 2026-03-02 00:57:36.177797 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.69s 2026-03-02 00:57:36.177801 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.64s 2026-03-02 00:57:36.177804 | orchestrator | 2026-03-02 00:57:36 | INFO  | Task 1d9a69c5-7794-4c1a-af62-a211c6f6f02d is in state STARTED 2026-03-02 00:57:36.177808 | orchestrator | 2026-03-02 00:57:36 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:57:39.235397 | orchestrator | 2026-03-02 00:57:39 | INFO  | Task fbfb5221-51e7-4938-a420-7426216897f5 is in state STARTED 2026-03-02 00:57:39.237353 | orchestrator | 2026-03-02 00:57:39 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED 2026-03-02 00:57:39.238882 | orchestrator | 2026-03-02 00:57:39 | INFO  | Task 1d9a69c5-7794-4c1a-af62-a211c6f6f02d is 
in state STARTED 2026-03-02 00:57:39.240360 | orchestrator | 2026-03-02 00:57:39 | INFO  | Wait 1 second(s) until the next check [... identical polling records for tasks fbfb5221-51e7-4938-a420-7426216897f5, e8b80cf5-90ff-4c9a-a5d3-d9978063419f and 1d9a69c5-7794-4c1a-af62-a211c6f6f02d (all in state STARTED, checked every ~3 s) repeated from 00:57:42 through 00:58:46, omitted ...] 2026-03-02 00:58:49.270741 | orchestrator | 2026-03-02 00:58:49 | INFO  | Task 1d9a69c5-7794-4c1a-af62-a211c6f6f02d is in state
STARTED 2026-03-02 00:58:49.270791 | orchestrator | 2026-03-02 00:58:49 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:58:52.319654 | orchestrator | 2026-03-02 00:58:52 | INFO  | Task fbfb5221-51e7-4938-a420-7426216897f5 is in state SUCCESS 2026-03-02 00:58:52.320892 | orchestrator | 2026-03-02 00:58:52.320940 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-02 00:58:52.320947 | orchestrator | 2.16.14 2026-03-02 00:58:52.320952 | orchestrator | 2026-03-02 00:58:52.320957 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-02 00:58:52.320962 | orchestrator | 2026-03-02 00:58:52.320966 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-02 00:58:52.320971 | orchestrator | Monday 02 March 2026 00:56:41 +0000 (0:00:00.633) 0:00:00.633 ********** 2026-03-02 00:58:52.320975 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:58:52.320980 | orchestrator | 2026-03-02 00:58:52.320984 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-02 00:58:52.320999 | orchestrator | Monday 02 March 2026 00:56:42 +0000 (0:00:00.616) 0:00:01.250 ********** 2026-03-02 00:58:52.321004 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:58:52.321008 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:58:52.321012 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:58:52.321015 | orchestrator | 2026-03-02 00:58:52.321019 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-02 00:58:52.321031 | orchestrator | Monday 02 March 2026 00:56:43 +0000 (0:00:00.651) 0:00:01.901 ********** 2026-03-02 00:58:52.321036 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:58:52.321039 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:58:52.321043 | 
orchestrator | ok: [testbed-node-5] 2026-03-02 00:58:52.321054 | orchestrator | 2026-03-02 00:58:52.321058 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-02 00:58:52.321062 | orchestrator | Monday 02 March 2026 00:56:43 +0000 (0:00:00.309) 0:00:02.211 ********** 2026-03-02 00:58:52.321066 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:58:52.321070 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:58:52.321089 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:58:52.321093 | orchestrator | 2026-03-02 00:58:52.321097 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-02 00:58:52.321100 | orchestrator | Monday 02 March 2026 00:56:44 +0000 (0:00:00.870) 0:00:03.081 ********** 2026-03-02 00:58:52.321104 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:58:52.321108 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:58:52.321112 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:58:52.321115 | orchestrator | 2026-03-02 00:58:52.321216 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-02 00:58:52.321221 | orchestrator | Monday 02 March 2026 00:56:44 +0000 (0:00:00.301) 0:00:03.383 ********** 2026-03-02 00:58:52.321225 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:58:52.321228 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:58:52.321357 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:58:52.321364 | orchestrator | 2026-03-02 00:58:52.321368 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-02 00:58:52.321371 | orchestrator | Monday 02 March 2026 00:56:44 +0000 (0:00:00.305) 0:00:03.688 ********** 2026-03-02 00:58:52.321375 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:58:52.321379 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:58:52.321383 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:58:52.321387 | 
orchestrator | 2026-03-02 00:58:52.321391 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-02 00:58:52.321394 | orchestrator | Monday 02 March 2026 00:56:45 +0000 (0:00:00.302) 0:00:03.991 ********** 2026-03-02 00:58:52.321398 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:58:52.321403 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:58:52.321407 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:58:52.321410 | orchestrator | 2026-03-02 00:58:52.321414 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-02 00:58:52.321418 | orchestrator | Monday 02 March 2026 00:56:45 +0000 (0:00:00.482) 0:00:04.474 ********** 2026-03-02 00:58:52.321422 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:58:52.321425 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:58:52.321429 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:58:52.321433 | orchestrator | 2026-03-02 00:58:52.321437 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-02 00:58:52.321441 | orchestrator | Monday 02 March 2026 00:56:45 +0000 (0:00:00.286) 0:00:04.760 ********** 2026-03-02 00:58:52.321445 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-02 00:58:52.321448 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-02 00:58:52.321452 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-02 00:58:52.321456 | orchestrator | 2026-03-02 00:58:52.321460 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-02 00:58:52.321464 | orchestrator | Monday 02 March 2026 00:56:46 +0000 (0:00:00.627) 0:00:05.387 ********** 2026-03-02 00:58:52.321467 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:58:52.321471 | 
orchestrator | ok: [testbed-node-4] 2026-03-02 00:58:52.321475 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:58:52.321479 | orchestrator | 2026-03-02 00:58:52.321482 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-02 00:58:52.321486 | orchestrator | Monday 02 March 2026 00:56:46 +0000 (0:00:00.457) 0:00:05.845 ********** 2026-03-02 00:58:52.321490 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-02 00:58:52.321547 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-02 00:58:52.321551 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-02 00:58:52.321555 | orchestrator | 2026-03-02 00:58:52.321559 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-02 00:58:52.321563 | orchestrator | Monday 02 March 2026 00:56:49 +0000 (0:00:02.207) 0:00:08.052 ********** 2026-03-02 00:58:52.321574 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-02 00:58:52.321615 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-02 00:58:52.321619 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-02 00:58:52.321623 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:58:52.321627 | orchestrator | 2026-03-02 00:58:52.321739 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-02 00:58:52.321748 | orchestrator | Monday 02 March 2026 00:56:49 +0000 (0:00:00.603) 0:00:08.656 ********** 2026-03-02 00:58:52.321753 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-02 00:58:52.321765 | 
orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.321770 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.321774 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:58:52.321777 | orchestrator |
2026-03-02 00:58:52.321781 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-02 00:58:52.321785 | orchestrator | Monday 02 March 2026 00:56:50 +0000 (0:00:00.830) 0:00:09.487 **********
2026-03-02 00:58:52.321790 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.321797 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.321801 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.321805 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:58:52.321809 | orchestrator |
2026-03-02 00:58:52.321812 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-02 00:58:52.321816 | orchestrator | Monday 02 March 2026 00:56:50 +0000 (0:00:00.346) 0:00:09.833 **********
2026-03-02 00:58:52.321821 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2656f804d8b6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-02 00:56:47.649070', 'end': '2026-03-02 00:56:47.692525', 'delta': '0:00:00.043455', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2656f804d8b6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.321833 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd8052155fcae', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-02 00:56:48.438367', 'end': '2026-03-02 00:56:48.476155', 'delta': '0:00:00.037788', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d8052155fcae'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.321854 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '4d36433fbafc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-02 00:56:48.975374', 'end': '2026-03-02 00:56:49.013751', 'delta': '0:00:00.038377', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4d36433fbafc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.321859 | orchestrator |
2026-03-02 00:58:52.321863 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-02 00:58:52.321867 | orchestrator | Monday 02 March 2026 00:56:51 +0000 (0:00:00.186) 0:00:10.020 **********
2026-03-02 00:58:52.321871 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:58:52.321874 | orchestrator | ok: [testbed-node-4]
2026-03-02 00:58:52.321878 | orchestrator | ok: [testbed-node-5]
2026-03-02 00:58:52.321882 | orchestrator |
2026-03-02 00:58:52.321886 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-02 00:58:52.321889 | orchestrator | Monday 02 March 2026 00:56:51 +0000 (0:00:00.441) 0:00:10.461 **********
2026-03-02 00:58:52.321893 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-02 00:58:52.321897 | orchestrator |
2026-03-02 00:58:52.321901 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-02 00:58:52.321904 | orchestrator | Monday 02 March 2026 00:56:53 +0000 (0:00:01.784) 0:00:12.246 **********
2026-03-02 00:58:52.321908 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:58:52.321912 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:58:52.321916 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:58:52.321919 | orchestrator |
2026-03-02 00:58:52.321923 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-02 00:58:52.321927 | orchestrator | Monday 02 March 2026 00:56:53 +0000 (0:00:00.307) 0:00:12.554 **********
2026-03-02 00:58:52.321931 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:58:52.321937 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:58:52.321943 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:58:52.321949 | orchestrator |
2026-03-02 00:58:52.321955 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-02 00:58:52.321962 | orchestrator | Monday 02 March 2026 00:56:54 +0000 (0:00:00.394) 0:00:12.949 **********
2026-03-02 00:58:52.321968 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:58:52.321974 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:58:52.321983 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:58:52.321990 | orchestrator |
2026-03-02 00:58:52.321998 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-02 00:58:52.322003 | orchestrator | Monday 02 March 2026 00:56:54 +0000 (0:00:00.452) 0:00:13.401 **********
2026-03-02 00:58:52.322054 | orchestrator | ok: [testbed-node-3]
2026-03-02 00:58:52.322062 | orchestrator |
2026-03-02 00:58:52.322068 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-02 00:58:52.322074 | orchestrator | Monday 02 March 2026 00:56:54 +0000 (0:00:00.131) 0:00:13.532 **********
2026-03-02 00:58:52.322080 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:58:52.322086 | orchestrator |
2026-03-02 00:58:52.322092 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-02 00:58:52.322100 | orchestrator | Monday 02 March 2026 00:56:54 +0000 (0:00:00.222) 0:00:13.754 **********
2026-03-02 00:58:52.322104 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:58:52.322108 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:58:52.322111 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:58:52.322115 | orchestrator |
2026-03-02 00:58:52.322119 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-02 00:58:52.322123 | orchestrator | Monday 02 March 2026 00:56:55 +0000 (0:00:00.336) 0:00:14.091 **********
2026-03-02 00:58:52.322127 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:58:52.322130 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:58:52.322134 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:58:52.322138 | orchestrator |
2026-03-02 00:58:52.322141 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-02 00:58:52.322145 | orchestrator | Monday 02 March 2026 00:56:55 +0000 (0:00:00.334) 0:00:14.426 **********
2026-03-02 00:58:52.322149 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:58:52.322152 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:58:52.322156 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:58:52.322160 | orchestrator |
2026-03-02 00:58:52.322164 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-02 00:58:52.322167 | orchestrator | Monday 02 March 2026 00:56:56 +0000 (0:00:00.530) 0:00:14.957 **********
2026-03-02 00:58:52.322171 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:58:52.322175 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:58:52.322179 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:58:52.322182 | orchestrator |
2026-03-02 00:58:52.322186 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-02 00:58:52.322190 | orchestrator | Monday 02 March 2026 00:56:56 +0000 (0:00:00.323) 0:00:15.280 **********
2026-03-02 00:58:52.322194 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:58:52.322197 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:58:52.322201 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:58:52.322205 | orchestrator |
2026-03-02 00:58:52.322209 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-02 00:58:52.322212 | orchestrator | Monday 02 March 2026 00:56:56 +0000 (0:00:00.320) 0:00:15.601 **********
2026-03-02 00:58:52.322216 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:58:52.322220 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:58:52.322223 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:58:52.322253 | orchestrator |
2026-03-02 00:58:52.322258 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-02 00:58:52.322262 | orchestrator | Monday 02 March 2026 00:56:57 +0000 (0:00:00.318) 0:00:15.919 **********
2026-03-02 00:58:52.322266 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:58:52.322269 | orchestrator | skipping: [testbed-node-4]
2026-03-02 00:58:52.322273 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:58:52.322277 | orchestrator |
2026-03-02 00:58:52.322280 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-02 00:58:52.322284 | orchestrator | Monday 02 March 2026 00:56:57 +0000 (0:00:00.520) 0:00:16.440 **********
2026-03-02 00:58:52.322293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--271875e3--8908--5e0e--b413--64afee9519da-osd--block--271875e3--8908--5e0e--b413--64afee9519da', 'dm-uuid-LVM-WNv815OLXXMZRQbtiKCV4Kr3DgLyU9EOVlEc6MsXQpd8yGIWlVJyJqw6pfocxWsk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--52125f52--6af3--5290--9fed--9584660c39a2-osd--block--52125f52--6af3--5290--9fed--9584660c39a2', 'dm-uuid-LVM-88ZyFcat0F1j3lRWAWVpLMTFRXW0sRdGtrjcGy2UECAST2MzryQfRyInmupddH55'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322345 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de3a51bd--019b--527a--8dea--ff4c94e5d801-osd--block--de3a51bd--019b--527a--8dea--ff4c94e5d801', 'dm-uuid-LVM-X7cvTTEIT9w4bQ22AYz8LHyJx3B21eeHRkfsquG31QVc2M6iZhDe3TjYXiYe1V7w'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8', 'scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:58:52.322385 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8a84d633--ba5b--5049--b6da--2482ee8b3083-osd--block--8a84d633--ba5b--5049--b6da--2482ee8b3083', 'dm-uuid-LVM-0raIyonegJuhMDcTKEhe4ST5v39sRW18JPOuu0DO5SvF8e5nlgH3QJUQBYP7184M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--271875e3--8908--5e0e--b413--64afee9519da-osd--block--271875e3--8908--5e0e--b413--64afee9519da'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GibLfR-79z8-CZQQ-AaT6-vyUn-mfC4-7uiS0U', 'scsi-0QEMU_QEMU_HARDDISK_e7c30f24-07cf-4e73-8c7c-bba1057c8cb7', 'scsi-SQEMU_QEMU_HARDDISK_e7c30f24-07cf-4e73-8c7c-bba1057c8cb7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:58:52.322402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322407 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--52125f52--6af3--5290--9fed--9584660c39a2-osd--block--52125f52--6af3--5290--9fed--9584660c39a2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3udVJJ-Z4dW-1pEk-THR1-4Uh3-fd0B-MD3V3e', 'scsi-0QEMU_QEMU_HARDDISK_7341868e-8f6a-460c-870a-5a0cce1fa311', 'scsi-SQEMU_QEMU_HARDDISK_7341868e-8f6a-460c-870a-5a0cce1fa311'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:58:52.322416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24fb8e5a-509d-4406-a727-cf15b40a450f', 'scsi-SQEMU_QEMU_HARDDISK_24fb8e5a-509d-4406-a727-cf15b40a450f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:58:52.322427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-02-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:58:52.322458 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:58:52.322466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322489 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba', 'scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part1', 'scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part14', 'scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part15', 'scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part16', 'scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:58:52.322518 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--de3a51bd--019b--527a--8dea--ff4c94e5d801-osd--block--de3a51bd--019b--527a--8dea--ff4c94e5d801'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QqUOyE-CENj-AKoE-jSXL-82K2-5S2Y-c6X3Nb', 'scsi-0QEMU_QEMU_HARDDISK_5b76853c-a11b-45e9-97a5-74de733f1116', 'scsi-SQEMU_QEMU_HARDDISK_5b76853c-a11b-45e9-97a5-74de733f1116'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:58:52.322526 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8a84d633--ba5b--5049--b6da--2482ee8b3083-osd--block--8a84d633--ba5b--5049--b6da--2482ee8b3083'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XuzJqd-HbZT-pgPn-w1pW-C0el-OSmi-JJyU7B', 'scsi-0QEMU_QEMU_HARDDISK_34a77e0f-df07-4c87-b046-7d039bca2077', 'scsi-SQEMU_QEMU_HARDDISK_34a77e0f-df07-4c87-b046-7d039bca2077'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:58:52.322534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c1d64d47--37ed--5019--b7d5--718691437d08-osd--block--c1d64d47--37ed--5019--b7d5--718691437d08', 'dm-uuid-LVM-C7wI1BFuiaw8aSvNbWcEoZ1EQsbpTxTnNfm95Z36zsnDlGBAxIUPcNHZvHLHwecj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322538 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2184af5-6da0-496d-b48a-b0daa217c842', 'scsi-SQEMU_QEMU_HARDDISK_b2184af5-6da0-496d-b48a-b0daa217c842'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:58:52.322543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3d7235d6--f117--525f--ba2d--9ab371851486-osd--block--3d7235d6--f117--525f--ba2d--9ab371851486', 'dm-uuid-LVM-RHsRT0YPyFvhCvBP6SDzu5rOjXWQZDRQJGf4somc7uro0NlBFNAFPACPiQn8QAtF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-02 00:58:52.322554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-02-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-02 00:58:52.322562 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:58:52.322569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-02 00:58:52.322573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-02 00:58:52.322578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-02 00:58:52.322583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-02 00:58:52.322587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-02 00:58:52.322592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-02 00:58:52.322596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-02 00:58:52.322601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-02 00:58:52.322612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8', 'scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part1', 'scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part14', 'scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part15', 'scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part16', 'scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-02 00:58:52.322621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c1d64d47--37ed--5019--b7d5--718691437d08-osd--block--c1d64d47--37ed--5019--b7d5--718691437d08'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-m5OWFx-qV6J-SOnf-y3AO-7CMl-3bkk-865ljm', 'scsi-0QEMU_QEMU_HARDDISK_a8122f95-b81e-4023-b303-8950dd4c9351', 'scsi-SQEMU_QEMU_HARDDISK_a8122f95-b81e-4023-b303-8950dd4c9351'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-02 00:58:52.322626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--3d7235d6--f117--525f--ba2d--9ab371851486-osd--block--3d7235d6--f117--525f--ba2d--9ab371851486'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fExIFV-GGb0-7Kbe-xUmS-cmPa-fq8j-EcF8yi', 'scsi-0QEMU_QEMU_HARDDISK_ac18fc5b-7614-46f9-bf3c-282e02a3d506', 'scsi-SQEMU_QEMU_HARDDISK_ac18fc5b-7614-46f9-bf3c-282e02a3d506'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-02 00:58:52.322630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3458d56d-fe8a-4fae-86e7-5458fccbe7bb', 'scsi-SQEMU_QEMU_HARDDISK_3458d56d-fe8a-4fae-86e7-5458fccbe7bb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-02 00:58:52.322643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-02-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-02 00:58:52.322647 | orchestrator | skipping: [testbed-node-5]
2026-03-02 00:58:52.322651 | orchestrator |
2026-03-02 00:58:52.322655 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-02 00:58:52.322658 | orchestrator | Monday 02 March 2026 00:56:58 +0000 (0:00:00.545) 0:00:16.985 **********
2026-03-02 00:58:52.322663 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--271875e3--8908--5e0e--b413--64afee9519da-osd--block--271875e3--8908--5e0e--b413--64afee9519da', 'dm-uuid-LVM-WNv815OLXXMZRQbtiKCV4Kr3DgLyU9EOVlEc6MsXQpd8yGIWlVJyJqw6pfocxWsk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322668 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--52125f52--6af3--5290--9fed--9584660c39a2-osd--block--52125f52--6af3--5290--9fed--9584660c39a2', 'dm-uuid-LVM-88ZyFcat0F1j3lRWAWVpLMTFRXW0sRdGtrjcGy2UECAST2MzryQfRyInmupddH55'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322672 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322724 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322740 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322750 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322757 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322761 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322765 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de3a51bd--019b--527a--8dea--ff4c94e5d801-osd--block--de3a51bd--019b--527a--8dea--ff4c94e5d801', 'dm-uuid-LVM-X7cvTTEIT9w4bQ22AYz8LHyJx3B21eeHRkfsquG31QVc2M6iZhDe3TjYXiYe1V7w'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322769 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322773 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8a84d633--ba5b--5049--b6da--2482ee8b3083-osd--block--8a84d633--ba5b--5049--b6da--2482ee8b3083', 'dm-uuid-LVM-0raIyonegJuhMDcTKEhe4ST5v39sRW18JPOuu0DO5SvF8e5nlgH3QJUQBYP7184M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322783 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322790 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322794 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8', 'scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d7ed2fa5-0b2b-423c-8c57-00f5dd79e8b8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322803 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322815 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--271875e3--8908--5e0e--b413--64afee9519da-osd--block--271875e3--8908--5e0e--b413--64afee9519da'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GibLfR-79z8-CZQQ-AaT6-vyUn-mfC4-7uiS0U', 'scsi-0QEMU_QEMU_HARDDISK_e7c30f24-07cf-4e73-8c7c-bba1057c8cb7', 'scsi-SQEMU_QEMU_HARDDISK_e7c30f24-07cf-4e73-8c7c-bba1057c8cb7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322820 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--52125f52--6af3--5290--9fed--9584660c39a2-osd--block--52125f52--6af3--5290--9fed--9584660c39a2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3udVJJ-Z4dW-1pEk-THR1-4Uh3-fd0B-MD3V3e', 'scsi-0QEMU_QEMU_HARDDISK_7341868e-8f6a-460c-870a-5a0cce1fa311', 'scsi-SQEMU_QEMU_HARDDISK_7341868e-8f6a-460c-870a-5a0cce1fa311'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322824 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24fb8e5a-509d-4406-a727-cf15b40a450f', 'scsi-SQEMU_QEMU_HARDDISK_24fb8e5a-509d-4406-a727-cf15b40a450f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322828 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-02-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322836 | orchestrator | skipping: [testbed-node-3]
2026-03-02 00:58:52.322840 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322847 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322854 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c1d64d47--37ed--5019--b7d5--718691437d08-osd--block--c1d64d47--37ed--5019--b7d5--718691437d08', 'dm-uuid-LVM-C7wI1BFuiaw8aSvNbWcEoZ1EQsbpTxTnNfm95Z36zsnDlGBAxIUPcNHZvHLHwecj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322858 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322862 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3d7235d6--f117--525f--ba2d--9ab371851486-osd--block--3d7235d6--f117--525f--ba2d--9ab371851486', 'dm-uuid-LVM-RHsRT0YPyFvhCvBP6SDzu5rOjXWQZDRQJGf4somc7uro0NlBFNAFPACPiQn8QAtF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322866 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322872 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322880 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322887 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322891 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322895 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322899 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322907 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322917 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba', 'scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part1', 'scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part14', 'scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part15', 'scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part16', 'scsi-SQEMU_QEMU_HARDDISK_7e8f2e0c-b41f-4b83-9d6c-a0655d053bba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322922 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322926 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322936 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--de3a51bd--019b--527a--8dea--ff4c94e5d801-osd--block--de3a51bd--019b--527a--8dea--ff4c94e5d801'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QqUOyE-CENj-AKoE-jSXL-82K2-5S2Y-c6X3Nb', 'scsi-0QEMU_QEMU_HARDDISK_5b76853c-a11b-45e9-97a5-74de733f1116', 'scsi-SQEMU_QEMU_HARDDISK_5b76853c-a11b-45e9-97a5-74de733f1116'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322946 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322958 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8a84d633--ba5b--5049--b6da--2482ee8b3083-osd--block--8a84d633--ba5b--5049--b6da--2482ee8b3083'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XuzJqd-HbZT-pgPn-w1pW-C0el-OSmi-JJyU7B', 'scsi-0QEMU_QEMU_HARDDISK_34a77e0f-df07-4c87-b046-7d039bca2077', 'scsi-SQEMU_QEMU_HARDDISK_34a77e0f-df07-4c87-b046-7d039bca2077'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-02 00:58:52.322965 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8', 'scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part1', 'scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part14', 'scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part15', 
'scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part16', 'scsi-SQEMU_QEMU_HARDDISK_efa3f915-a5b5-4a27-b49b-496543c700c8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:58:52.322979 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b2184af5-6da0-496d-b48a-b0daa217c842', 'scsi-SQEMU_QEMU_HARDDISK_b2184af5-6da0-496d-b48a-b0daa217c842'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:58:52.322989 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c1d64d47--37ed--5019--b7d5--718691437d08-osd--block--c1d64d47--37ed--5019--b7d5--718691437d08'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-m5OWFx-qV6J-SOnf-y3AO-7CMl-3bkk-865ljm', 'scsi-0QEMU_QEMU_HARDDISK_a8122f95-b81e-4023-b303-8950dd4c9351', 'scsi-SQEMU_QEMU_HARDDISK_a8122f95-b81e-4023-b303-8950dd4c9351'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:58:52.322996 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--3d7235d6--f117--525f--ba2d--9ab371851486-osd--block--3d7235d6--f117--525f--ba2d--9ab371851486'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fExIFV-GGb0-7Kbe-xUmS-cmPa-fq8j-EcF8yi', 'scsi-0QEMU_QEMU_HARDDISK_ac18fc5b-7614-46f9-bf3c-282e02a3d506', 'scsi-SQEMU_QEMU_HARDDISK_ac18fc5b-7614-46f9-bf3c-282e02a3d506'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:58:52.323007 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-02-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:58:52.323013 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3458d56d-fe8a-4fae-86e7-5458fccbe7bb', 'scsi-SQEMU_QEMU_HARDDISK_3458d56d-fe8a-4fae-86e7-5458fccbe7bb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:58:52.323023 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-02-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-02 00:58:52.323030 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:58:52.323037 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:58:52.323044 | orchestrator | 2026-03-02 00:58:52.323049 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-02 00:58:52.323053 | orchestrator | Monday 02 March 2026 00:56:58 +0000 (0:00:00.597) 0:00:17.583 ********** 2026-03-02 00:58:52.323060 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:58:52.323064 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:58:52.323068 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:58:52.323072 | orchestrator | 2026-03-02 00:58:52.323076 | 
orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-02 00:58:52.323079 | orchestrator | Monday 02 March 2026 00:56:59 +0000 (0:00:00.728) 0:00:18.311 ********** 2026-03-02 00:58:52.323083 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:58:52.323087 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:58:52.323091 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:58:52.323095 | orchestrator | 2026-03-02 00:58:52.323098 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-02 00:58:52.323102 | orchestrator | Monday 02 March 2026 00:56:59 +0000 (0:00:00.493) 0:00:18.805 ********** 2026-03-02 00:58:52.323106 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:58:52.323110 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:58:52.323113 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:58:52.323117 | orchestrator | 2026-03-02 00:58:52.323121 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-02 00:58:52.323125 | orchestrator | Monday 02 March 2026 00:57:01 +0000 (0:00:01.580) 0:00:20.385 ********** 2026-03-02 00:58:52.323133 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:58:52.323137 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:58:52.323140 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:58:52.323144 | orchestrator | 2026-03-02 00:58:52.323148 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-02 00:58:52.323152 | orchestrator | Monday 02 March 2026 00:57:01 +0000 (0:00:00.285) 0:00:20.671 ********** 2026-03-02 00:58:52.323156 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:58:52.323159 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:58:52.323163 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:58:52.323167 | orchestrator | 2026-03-02 00:58:52.323171 | orchestrator | TASK [ceph-facts : Set 
osd_pool_default_crush_rule fact] *********************** 2026-03-02 00:58:52.323174 | orchestrator | Monday 02 March 2026 00:57:02 +0000 (0:00:00.438) 0:00:21.110 ********** 2026-03-02 00:58:52.323178 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:58:52.323182 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:58:52.323186 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:58:52.323190 | orchestrator | 2026-03-02 00:58:52.323194 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-02 00:58:52.323197 | orchestrator | Monday 02 March 2026 00:57:02 +0000 (0:00:00.510) 0:00:21.621 ********** 2026-03-02 00:58:52.323283 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-02 00:58:52.323290 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-02 00:58:52.323294 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-02 00:58:52.323298 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-02 00:58:52.323302 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-02 00:58:52.323305 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-02 00:58:52.323309 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-02 00:58:52.323313 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-02 00:58:52.323317 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-02 00:58:52.323321 | orchestrator | 2026-03-02 00:58:52.323325 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-02 00:58:52.323329 | orchestrator | Monday 02 March 2026 00:57:03 +0000 (0:00:00.830) 0:00:22.452 ********** 2026-03-02 00:58:52.323333 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-02 00:58:52.323337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-02 00:58:52.323340 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-02 00:58:52.323344 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:58:52.323348 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-02 00:58:52.323352 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-02 00:58:52.323355 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-02 00:58:52.323359 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:58:52.323363 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-02 00:58:52.323367 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-02 00:58:52.323370 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-02 00:58:52.323374 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:58:52.323378 | orchestrator | 2026-03-02 00:58:52.323382 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-02 00:58:52.323385 | orchestrator | Monday 02 March 2026 00:57:03 +0000 (0:00:00.356) 0:00:22.808 ********** 2026-03-02 00:58:52.323390 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 00:58:52.323394 | orchestrator | 2026-03-02 00:58:52.323398 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-02 00:58:52.323402 | orchestrator | Monday 02 March 2026 00:57:04 +0000 (0:00:00.678) 0:00:23.486 ********** 2026-03-02 00:58:52.323413 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:58:52.323417 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:58:52.323421 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:58:52.323425 | orchestrator | 2026-03-02 00:58:52.323429 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address 
to radosgw_address_block ipv4] **** 2026-03-02 00:58:52.323433 | orchestrator | Monday 02 March 2026 00:57:04 +0000 (0:00:00.328) 0:00:23.815 ********** 2026-03-02 00:58:52.323437 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:58:52.323441 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:58:52.323445 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:58:52.323448 | orchestrator | 2026-03-02 00:58:52.323453 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-02 00:58:52.323457 | orchestrator | Monday 02 March 2026 00:57:05 +0000 (0:00:00.322) 0:00:24.138 ********** 2026-03-02 00:58:52.323460 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:58:52.323468 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:58:52.323472 | orchestrator | skipping: [testbed-node-5] 2026-03-02 00:58:52.323476 | orchestrator | 2026-03-02 00:58:52.323480 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-02 00:58:52.323484 | orchestrator | Monday 02 March 2026 00:57:05 +0000 (0:00:00.313) 0:00:24.452 ********** 2026-03-02 00:58:52.323487 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:58:52.323491 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:58:52.323516 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:58:52.323520 | orchestrator | 2026-03-02 00:58:52.323524 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-02 00:58:52.323528 | orchestrator | Monday 02 March 2026 00:57:06 +0000 (0:00:00.650) 0:00:25.102 ********** 2026-03-02 00:58:52.323532 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-02 00:58:52.323536 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-02 00:58:52.323540 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-02 00:58:52.323543 | orchestrator | skipping: [testbed-node-3] 
2026-03-02 00:58:52.323547 | orchestrator | 2026-03-02 00:58:52.323551 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-02 00:58:52.323555 | orchestrator | Monday 02 March 2026 00:57:06 +0000 (0:00:00.404) 0:00:25.507 ********** 2026-03-02 00:58:52.323559 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-02 00:58:52.323563 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-02 00:58:52.323566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-02 00:58:52.323570 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:58:52.323574 | orchestrator | 2026-03-02 00:58:52.323578 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-02 00:58:52.323581 | orchestrator | Monday 02 March 2026 00:57:06 +0000 (0:00:00.364) 0:00:25.871 ********** 2026-03-02 00:58:52.323585 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-02 00:58:52.323589 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-02 00:58:52.323593 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-02 00:58:52.323596 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:58:52.323600 | orchestrator | 2026-03-02 00:58:52.323604 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-02 00:58:52.323608 | orchestrator | Monday 02 March 2026 00:57:07 +0000 (0:00:00.375) 0:00:26.247 ********** 2026-03-02 00:58:52.323612 | orchestrator | ok: [testbed-node-3] 2026-03-02 00:58:52.323616 | orchestrator | ok: [testbed-node-4] 2026-03-02 00:58:52.323619 | orchestrator | ok: [testbed-node-5] 2026-03-02 00:58:52.323623 | orchestrator | 2026-03-02 00:58:52.323627 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-02 00:58:52.323631 | orchestrator | Monday 02 
March 2026 00:57:07 +0000 (0:00:00.393) 0:00:26.641 ********** 2026-03-02 00:58:52.323635 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-02 00:58:52.323643 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-02 00:58:52.323647 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-02 00:58:52.323650 | orchestrator | 2026-03-02 00:58:52.323654 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-02 00:58:52.323658 | orchestrator | Monday 02 March 2026 00:57:08 +0000 (0:00:00.540) 0:00:27.182 ********** 2026-03-02 00:58:52.323662 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-02 00:58:52.323666 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-02 00:58:52.323670 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-02 00:58:52.323674 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-02 00:58:52.323677 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-02 00:58:52.323681 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-02 00:58:52.323685 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-02 00:58:52.323689 | orchestrator | 2026-03-02 00:58:52.323693 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-02 00:58:52.323697 | orchestrator | Monday 02 March 2026 00:57:09 +0000 (0:00:01.151) 0:00:28.333 ********** 2026-03-02 00:58:52.323700 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-02 00:58:52.323704 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-02 00:58:52.323708 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-02 00:58:52.323712 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-02 00:58:52.323715 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-02 00:58:52.323719 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-02 00:58:52.323726 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-02 00:58:52.323730 | orchestrator | 2026-03-02 00:58:52.323734 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-02 00:58:52.323738 | orchestrator | Monday 02 March 2026 00:57:11 +0000 (0:00:01.950) 0:00:30.283 ********** 2026-03-02 00:58:52.323741 | orchestrator | skipping: [testbed-node-3] 2026-03-02 00:58:52.323745 | orchestrator | skipping: [testbed-node-4] 2026-03-02 00:58:52.323749 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-02 00:58:52.323753 | orchestrator | 2026-03-02 00:58:52.323757 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-02 00:58:52.323761 | orchestrator | Monday 02 March 2026 00:57:11 +0000 (0:00:00.396) 0:00:30.679 ********** 2026-03-02 00:58:52.323767 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-02 00:58:52.323773 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 
'size': 3, 'type': 1}) 2026-03-02 00:58:52.323777 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-02 00:58:52.323781 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-02 00:58:52.323788 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-02 00:58:52.323792 | orchestrator | 2026-03-02 00:58:52.323796 | orchestrator | TASK [generate keys] *********************************************************** 2026-03-02 00:58:52.323800 | orchestrator | Monday 02 March 2026 00:57:56 +0000 (0:00:45.035) 0:01:15.715 ********** 2026-03-02 00:58:52.323804 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:58:52.323807 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:58:52.323811 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:58:52.323815 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:58:52.323819 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:58:52.323822 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2026-03-02 00:58:52.323826 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-02 00:58:52.323830 | orchestrator | 2026-03-02 00:58:52.323834 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-02 00:58:52.323838 | orchestrator | Monday 02 March 2026 00:58:21 +0000 (0:00:24.574) 0:01:40.289 ********** 2026-03-02 00:58:52.323841 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:58:52.323845 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:58:52.323849 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:58:52.323853 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:58:52.323857 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:58:52.323861 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:58:52.323864 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-02 00:58:52.323868 | orchestrator | 2026-03-02 00:58:52.323872 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-02 00:58:52.323876 | orchestrator | Monday 02 March 2026 00:58:33 +0000 (0:00:11.994) 0:01:52.283 ********** 2026-03-02 00:58:52.323880 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:58:52.323884 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-02 00:58:52.323887 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-02 00:58:52.323891 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-02 00:58:52.323895 | orchestrator | changed: 
[testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-02 00:58:52.323901 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-02 00:58:52.323905 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-02 00:58:52.323909 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-02 00:58:52.323913 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-02 00:58:52.323916 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-02 00:58:52.323920 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-02 00:58:52.323931 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-02 00:58:52.323941 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-02 00:58:52.323948 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-02 00:58:52.323954 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-02 00:58:52.323960 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-02 00:58:52.323967 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-02 00:58:52.323974 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-02 00:58:52.323981 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-03-02 00:58:52.323988 | orchestrator |
2026-03-02 00:58:52.323995 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 00:58:52.324002 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-02 00:58:52.324010 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-02 00:58:52.324016 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-02 00:58:52.324023 | orchestrator |
2026-03-02 00:58:52.324029 | orchestrator |
2026-03-02 00:58:52.324036 | orchestrator |
2026-03-02 00:58:52.324043 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 00:58:52.324050 | orchestrator | Monday 02 March 2026 00:58:50 +0000 (0:00:17.008) 0:02:09.292 **********
2026-03-02 00:58:52.324057 | orchestrator | ===============================================================================
2026-03-02 00:58:52.324063 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.04s
2026-03-02 00:58:52.324068 | orchestrator | generate keys ---------------------------------------------------------- 24.57s
2026-03-02 00:58:52.324072 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.01s
2026-03-02 00:58:52.324076 | orchestrator | get keys from monitors ------------------------------------------------- 11.99s
2026-03-02 00:58:52.324081 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.21s
2026-03-02 00:58:52.324085 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.95s
2026-03-02 00:58:52.324090 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.78s
2026-03-02 00:58:52.324094 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 1.58s
2026-03-02 00:58:52.324098 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.15s
2026-03-02 00:58:52.324103 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.87s
2026-03-02 00:58:52.324107 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.83s
2026-03-02 00:58:52.324111 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.83s
2026-03-02 00:58:52.324116 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.73s
2026-03-02 00:58:52.324120 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.68s
2026-03-02 00:58:52.324124 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.65s
2026-03-02 00:58:52.324129 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.65s
2026-03-02 00:58:52.324133 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.63s
2026-03-02 00:58:52.324137 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.62s
2026-03-02 00:58:52.324142 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.60s
2026-03-02 00:58:52.324146 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.60s
2026-03-02 00:58:52.324155 | orchestrator | 2026-03-02 00:58:52 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED
2026-03-02 00:58:52.324160 | orchestrator | 2026-03-02 00:58:52 | INFO  | Task e30584b4-f227-4386-9d4f-57b9b03dfe6c is in state STARTED
2026-03-02 00:58:52.324993 | orchestrator | 2026-03-02 00:58:52 | INFO  | Task 1d9a69c5-7794-4c1a-af62-a211c6f6f02d is in state STARTED
2026-03-02 00:58:52.325098 | orchestrator | 2026-03-02 00:58:52 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:58:55.374266 | orchestrator | 2026-03-02 00:58:55 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED
2026-03-02 00:58:55.375695 | orchestrator | 2026-03-02 00:58:55 | INFO  | Task e30584b4-f227-4386-9d4f-57b9b03dfe6c is in state STARTED
2026-03-02 00:58:55.377706 | orchestrator | 2026-03-02 00:58:55 | INFO  | Task 1d9a69c5-7794-4c1a-af62-a211c6f6f02d is in state STARTED
2026-03-02 00:58:55.378340 | orchestrator | 2026-03-02 00:58:55 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:58:58.425202 | orchestrator | 2026-03-02 00:58:58 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED
2026-03-02 00:58:58.426380 | orchestrator | 2026-03-02 00:58:58 | INFO  | Task e30584b4-f227-4386-9d4f-57b9b03dfe6c is in state STARTED
2026-03-02 00:58:58.427999 | orchestrator | 2026-03-02 00:58:58 | INFO  | Task 1d9a69c5-7794-4c1a-af62-a211c6f6f02d is in state STARTED
2026-03-02 00:58:58.428044 | orchestrator | 2026-03-02 00:58:58 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:59:01.474191 | orchestrator | 2026-03-02 00:59:01 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED
2026-03-02 00:59:01.480903 | orchestrator | 2026-03-02 00:59:01 | INFO  | Task e30584b4-f227-4386-9d4f-57b9b03dfe6c is in state STARTED
2026-03-02 00:59:01.482449 | orchestrator | 2026-03-02 00:59:01 | INFO  | Task 1d9a69c5-7794-4c1a-af62-a211c6f6f02d is in state STARTED
2026-03-02 00:59:01.482591 | orchestrator | 2026-03-02 00:59:01 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:59:04.524619 | orchestrator | 2026-03-02 00:59:04 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED
2026-03-02 00:59:04.526744 | orchestrator | 2026-03-02 00:59:04 | INFO  | Task e30584b4-f227-4386-9d4f-57b9b03dfe6c is in state STARTED
2026-03-02 00:59:04.528587 | orchestrator | 2026-03-02 00:59:04 | INFO  | Task 1d9a69c5-7794-4c1a-af62-a211c6f6f02d is in state STARTED
2026-03-02 00:59:04.528798 | orchestrator | 2026-03-02 00:59:04 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:59:07.578070 | orchestrator | 2026-03-02 00:59:07 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED
2026-03-02 00:59:07.578760 | orchestrator | 2026-03-02 00:59:07 | INFO  | Task e30584b4-f227-4386-9d4f-57b9b03dfe6c is in state STARTED
2026-03-02 00:59:07.580583 | orchestrator | 2026-03-02 00:59:07 | INFO  | Task 1d9a69c5-7794-4c1a-af62-a211c6f6f02d is in state STARTED
2026-03-02 00:59:07.580636 | orchestrator | 2026-03-02 00:59:07 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:59:10.618175 | orchestrator | 2026-03-02 00:59:10 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED
2026-03-02 00:59:10.618250 | orchestrator | 2026-03-02 00:59:10 | INFO  | Task e30584b4-f227-4386-9d4f-57b9b03dfe6c is in state STARTED
2026-03-02 00:59:10.618767 | orchestrator | 2026-03-02 00:59:10 | INFO  | Task 1d9a69c5-7794-4c1a-af62-a211c6f6f02d is in state STARTED
2026-03-02 00:59:10.618784 | orchestrator | 2026-03-02 00:59:10 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:59:13.670822 | orchestrator | 2026-03-02 00:59:13 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED
2026-03-02 00:59:13.672847 | orchestrator | 2026-03-02 00:59:13 | INFO  | Task e30584b4-f227-4386-9d4f-57b9b03dfe6c is in state STARTED
2026-03-02 00:59:13.675433 | orchestrator | 2026-03-02 00:59:13 | INFO  | Task 1d9a69c5-7794-4c1a-af62-a211c6f6f02d is in state STARTED
2026-03-02 00:59:13.675852 | orchestrator | 2026-03-02 00:59:13 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:59:16.721155 | orchestrator | 2026-03-02 00:59:16 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED
2026-03-02 00:59:16.722250 | orchestrator | 2026-03-02 00:59:16 | INFO  | Task e30584b4-f227-4386-9d4f-57b9b03dfe6c is in state STARTED
2026-03-02 00:59:16.724203 | orchestrator | 2026-03-02 00:59:16 | INFO  | Task 1d9a69c5-7794-4c1a-af62-a211c6f6f02d is in state STARTED
2026-03-02 00:59:16.724238 | orchestrator | 2026-03-02 00:59:16 | INFO  | Wait 1 second(s) until the next check
2026-03-02 00:59:19.778922 | orchestrator | 2026-03-02 00:59:19 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED
2026-03-02 00:59:19.779020 | orchestrator | 2026-03-02 00:59:19 | INFO  | Task e30584b4-f227-4386-9d4f-57b9b03dfe6c is in state STARTED
2026-03-02 00:59:19.782697 | orchestrator | 2026-03-02 00:59:19 | INFO  | Task 1d9a69c5-7794-4c1a-af62-a211c6f6f02d is in state SUCCESS
2026-03-02 00:59:19.784275 | orchestrator |
2026-03-02 00:59:19.784330 | orchestrator |
2026-03-02 00:59:19.784339 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-02 00:59:19.784348 | orchestrator |
2026-03-02 00:59:19.784356 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-02 00:59:19.784363 | orchestrator | Monday 02 March 2026 00:57:38 +0000 (0:00:00.290) 0:00:00.290 **********
2026-03-02 00:59:19.784584 | orchestrator | ok: [testbed-node-0]
2026-03-02 00:59:19.784594 | orchestrator | ok: [testbed-node-1]
2026-03-02 00:59:19.784601 | orchestrator | ok: [testbed-node-2]
2026-03-02 00:59:19.784609 | orchestrator |
2026-03-02 00:59:19.784617 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-02 00:59:19.784625 | orchestrator | Monday 02 March 2026 00:57:38 +0000 (0:00:00.313) 0:00:00.603 **********
2026-03-02 00:59:19.784632 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-03-02 00:59:19.784640 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-03-02 00:59:19.784648 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-03-02 00:59:19.784655 | orchestrator |
2026-03-02 00:59:19.784663 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-03-02 00:59:19.784671 | orchestrator |
2026-03-02 00:59:19.784695 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-02 00:59:19.784703 |
orchestrator | Monday 02 March 2026 00:57:39 +0000 (0:00:00.456) 0:00:01.060 **********
2026-03-02 00:59:19.784710 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:59:19.784719 | orchestrator |
2026-03-02 00:59:19.784727 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-03-02 00:59:19.784734 | orchestrator | Monday 02 March 2026 00:57:39 +0000 (0:00:00.531) 0:00:01.592 **********
2026-03-02 00:59:19.784748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-02 00:59:19.784801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-02 00:59:19.784811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-02 00:59:19.784825 | orchestrator | 2026-03-02 00:59:19.784833 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-02 00:59:19.784842 | orchestrator | Monday 02 March 2026 00:57:40 +0000 (0:00:01.246) 0:00:02.838 ********** 2026-03-02 00:59:19.784848 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:59:19.784855 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:59:19.784863 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:59:19.784871 | orchestrator | 2026-03-02 00:59:19.784879 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-02 00:59:19.784886 | orchestrator | Monday 02 March 2026 00:57:41 +0000 (0:00:00.436) 0:00:03.274 ********** 2026-03-02 00:59:19.784893 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  
2026-03-02 00:59:19.784907 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-02 00:59:19.784914 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-02 00:59:19.784921 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-02 00:59:19.784928 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-02 00:59:19.784935 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-02 00:59:19.784944 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-02 00:59:19.784951 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-02 00:59:19.784960 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-02 00:59:19.784966 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-02 00:59:19.784978 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-02 00:59:19.784986 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-02 00:59:19.784994 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-02 00:59:19.785001 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-02 00:59:19.785015 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-02 00:59:19.785025 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-02 00:59:19.785032 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-02 00:59:19.785040 | orchestrator | skipping: [testbed-node-2] => 
(item={'name': 'heat', 'enabled': 'no'})  2026-03-02 00:59:19.785047 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-02 00:59:19.785054 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-02 00:59:19.785060 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-02 00:59:19.785067 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-02 00:59:19.785074 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-02 00:59:19.785081 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-02 00:59:19.785090 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-02 00:59:19.785100 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-02 00:59:19.785107 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-02 00:59:19.785114 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-02 00:59:19.785121 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-02 00:59:19.785128 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-02 00:59:19.785137 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-02 00:59:19.785143 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-02 00:59:19.785152 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-02 00:59:19.785160 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-02 00:59:19.785168 | orchestrator | 2026-03-02 00:59:19.785175 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-02 00:59:19.785181 | orchestrator | Monday 02 March 2026 00:57:42 +0000 (0:00:00.731) 0:00:04.005 ********** 2026-03-02 00:59:19.785188 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:59:19.785195 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:59:19.785202 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:59:19.785209 | orchestrator | 2026-03-02 00:59:19.785216 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-02 00:59:19.785224 | orchestrator | Monday 02 March 2026 00:57:42 +0000 (0:00:00.304) 0:00:04.310 ********** 2026-03-02 00:59:19.785232 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:59:19.785241 | orchestrator | 2026-03-02 00:59:19.785255 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-02 00:59:19.785263 | orchestrator | Monday 02 March 2026 00:57:42 +0000 (0:00:00.140) 0:00:04.450 ********** 2026-03-02 00:59:19.785277 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:59:19.785285 | orchestrator | skipping: [testbed-node-1] 
2026-03-02 00:59:19.785292 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:59:19.785300 | orchestrator | 2026-03-02 00:59:19.785308 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-02 00:59:19.785316 | orchestrator | Monday 02 March 2026 00:57:43 +0000 (0:00:00.456) 0:00:04.906 ********** 2026-03-02 00:59:19.785324 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:59:19.785332 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:59:19.785339 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:59:19.785347 | orchestrator | 2026-03-02 00:59:19.785355 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-02 00:59:19.785364 | orchestrator | Monday 02 March 2026 00:57:43 +0000 (0:00:00.298) 0:00:05.205 ********** 2026-03-02 00:59:19.785372 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:59:19.785379 | orchestrator | 2026-03-02 00:59:19.785393 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-02 00:59:19.785401 | orchestrator | Monday 02 March 2026 00:57:43 +0000 (0:00:00.132) 0:00:05.337 ********** 2026-03-02 00:59:19.785410 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:59:19.785417 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:59:19.785425 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:59:19.785432 | orchestrator | 2026-03-02 00:59:19.785468 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-02 00:59:19.785476 | orchestrator | Monday 02 March 2026 00:57:43 +0000 (0:00:00.295) 0:00:05.632 ********** 2026-03-02 00:59:19.785484 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:59:19.785491 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:59:19.785498 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:59:19.785505 | orchestrator | 2026-03-02 00:59:19.785512 | orchestrator | TASK [horizon : Check 
if policies shall be overwritten] ************************ 2026-03-02 00:59:19.785519 | orchestrator | Monday 02 March 2026 00:57:44 +0000 (0:00:00.303) 0:00:05.936 ********** 2026-03-02 00:59:19.785526 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:59:19.785533 | orchestrator | 2026-03-02 00:59:19.785540 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-02 00:59:19.785548 | orchestrator | Monday 02 March 2026 00:57:44 +0000 (0:00:00.323) 0:00:06.259 ********** 2026-03-02 00:59:19.785555 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:59:19.785562 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:59:19.785569 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:59:19.785576 | orchestrator | 2026-03-02 00:59:19.785584 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-02 00:59:19.785591 | orchestrator | Monday 02 March 2026 00:57:44 +0000 (0:00:00.309) 0:00:06.568 ********** 2026-03-02 00:59:19.785598 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:59:19.785605 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:59:19.785613 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:59:19.785620 | orchestrator | 2026-03-02 00:59:19.785627 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-02 00:59:19.785635 | orchestrator | Monday 02 March 2026 00:57:44 +0000 (0:00:00.300) 0:00:06.869 ********** 2026-03-02 00:59:19.785642 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:59:19.785649 | orchestrator | 2026-03-02 00:59:19.785657 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-02 00:59:19.785664 | orchestrator | Monday 02 March 2026 00:57:45 +0000 (0:00:00.120) 0:00:06.989 ********** 2026-03-02 00:59:19.785672 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:59:19.785679 | orchestrator | skipping: 
[testbed-node-1] 2026-03-02 00:59:19.785687 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:59:19.785695 | orchestrator | 2026-03-02 00:59:19.785703 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-02 00:59:19.785710 | orchestrator | Monday 02 March 2026 00:57:45 +0000 (0:00:00.308) 0:00:07.297 ********** 2026-03-02 00:59:19.785725 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:59:19.785732 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:59:19.785740 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:59:19.785747 | orchestrator | 2026-03-02 00:59:19.785754 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-02 00:59:19.785762 | orchestrator | Monday 02 March 2026 00:57:45 +0000 (0:00:00.485) 0:00:07.783 ********** 2026-03-02 00:59:19.785769 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:59:19.785776 | orchestrator | 2026-03-02 00:59:19.785783 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-02 00:59:19.785789 | orchestrator | Monday 02 March 2026 00:57:46 +0000 (0:00:00.137) 0:00:07.920 ********** 2026-03-02 00:59:19.785796 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:59:19.785803 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:59:19.785810 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:59:19.785817 | orchestrator | 2026-03-02 00:59:19.785825 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-02 00:59:19.785833 | orchestrator | Monday 02 March 2026 00:57:46 +0000 (0:00:00.301) 0:00:08.222 ********** 2026-03-02 00:59:19.785841 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:59:19.785849 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:59:19.785857 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:59:19.785865 | orchestrator | 2026-03-02 00:59:19.785872 | orchestrator | TASK 
[horizon : Check if policies shall be overwritten] ************************ 2026-03-02 00:59:19.785880 | orchestrator | Monday 02 March 2026 00:57:46 +0000 (0:00:00.312) 0:00:08.535 ********** 2026-03-02 00:59:19.785887 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:59:19.785894 | orchestrator | 2026-03-02 00:59:19.785902 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-02 00:59:19.785910 | orchestrator | Monday 02 March 2026 00:57:46 +0000 (0:00:00.133) 0:00:08.668 ********** 2026-03-02 00:59:19.785917 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:59:19.785925 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:59:19.785932 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:59:19.785939 | orchestrator | 2026-03-02 00:59:19.785946 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-02 00:59:19.785961 | orchestrator | Monday 02 March 2026 00:57:47 +0000 (0:00:00.307) 0:00:08.976 ********** 2026-03-02 00:59:19.785968 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:59:19.785975 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:59:19.785982 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:59:19.785989 | orchestrator | 2026-03-02 00:59:19.785996 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-02 00:59:19.786003 | orchestrator | Monday 02 March 2026 00:57:47 +0000 (0:00:00.518) 0:00:09.494 ********** 2026-03-02 00:59:19.786010 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:59:19.786068 | orchestrator | 2026-03-02 00:59:19.786075 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-02 00:59:19.786083 | orchestrator | Monday 02 March 2026 00:57:47 +0000 (0:00:00.125) 0:00:09.620 ********** 2026-03-02 00:59:19.786091 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:59:19.786098 | orchestrator 
| skipping: [testbed-node-1] 2026-03-02 00:59:19.786106 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:59:19.786113 | orchestrator | 2026-03-02 00:59:19.786120 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-02 00:59:19.786127 | orchestrator | Monday 02 March 2026 00:57:48 +0000 (0:00:00.288) 0:00:09.908 ********** 2026-03-02 00:59:19.786134 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:59:19.786148 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:59:19.786156 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:59:19.786164 | orchestrator | 2026-03-02 00:59:19.786171 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-02 00:59:19.786179 | orchestrator | Monday 02 March 2026 00:57:48 +0000 (0:00:00.324) 0:00:10.233 ********** 2026-03-02 00:59:19.786187 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:59:19.786204 | orchestrator | 2026-03-02 00:59:19.786212 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-02 00:59:19.786220 | orchestrator | Monday 02 March 2026 00:57:48 +0000 (0:00:00.138) 0:00:10.371 ********** 2026-03-02 00:59:19.786227 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:59:19.786235 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:59:19.786244 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:59:19.786252 | orchestrator | 2026-03-02 00:59:19.786261 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-02 00:59:19.786269 | orchestrator | Monday 02 March 2026 00:57:48 +0000 (0:00:00.493) 0:00:10.865 ********** 2026-03-02 00:59:19.786277 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:59:19.786285 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:59:19.786293 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:59:19.786300 | orchestrator | 2026-03-02 00:59:19.786308 | 
orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-02 00:59:19.786315 | orchestrator | Monday 02 March 2026 00:57:49 +0000 (0:00:00.301) 0:00:11.166 ********** 2026-03-02 00:59:19.786323 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:59:19.786331 | orchestrator | 2026-03-02 00:59:19.786338 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-02 00:59:19.786348 | orchestrator | Monday 02 March 2026 00:57:49 +0000 (0:00:00.116) 0:00:11.282 ********** 2026-03-02 00:59:19.786356 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:59:19.786363 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:59:19.786370 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:59:19.786377 | orchestrator | 2026-03-02 00:59:19.786384 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-02 00:59:19.786391 | orchestrator | Monday 02 March 2026 00:57:49 +0000 (0:00:00.296) 0:00:11.579 ********** 2026-03-02 00:59:19.786398 | orchestrator | ok: [testbed-node-0] 2026-03-02 00:59:19.786405 | orchestrator | ok: [testbed-node-1] 2026-03-02 00:59:19.786414 | orchestrator | ok: [testbed-node-2] 2026-03-02 00:59:19.786421 | orchestrator | 2026-03-02 00:59:19.786428 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-02 00:59:19.786436 | orchestrator | Monday 02 March 2026 00:57:50 +0000 (0:00:00.313) 0:00:11.893 ********** 2026-03-02 00:59:19.786466 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:59:19.786474 | orchestrator | 2026-03-02 00:59:19.786481 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-02 00:59:19.786487 | orchestrator | Monday 02 March 2026 00:57:50 +0000 (0:00:00.129) 0:00:12.023 ********** 2026-03-02 00:59:19.786494 | orchestrator | skipping: [testbed-node-0] 2026-03-02 
00:59:19.786501 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:59:19.786507 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:59:19.786514 | orchestrator |
2026-03-02 00:59:19.786521 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-03-02 00:59:19.786529 | orchestrator | Monday 02 March 2026 00:57:50 +0000 (0:00:00.518) 0:00:12.541 **********
2026-03-02 00:59:19.786536 | orchestrator | changed: [testbed-node-2]
2026-03-02 00:59:19.786543 | orchestrator | changed: [testbed-node-1]
2026-03-02 00:59:19.786550 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:59:19.786558 | orchestrator |
2026-03-02 00:59:19.786565 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-03-02 00:59:19.786572 | orchestrator | Monday 02 March 2026 00:57:52 +0000 (0:00:01.670) 0:00:14.211 **********
2026-03-02 00:59:19.786580 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-02 00:59:19.786588 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-02 00:59:19.786596 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-02 00:59:19.786604 | orchestrator |
2026-03-02 00:59:19.786612 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-03-02 00:59:19.786621 | orchestrator | Monday 02 March 2026 00:57:54 +0000 (0:00:01.984) 0:00:16.196 **********
2026-03-02 00:59:19.786639 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-02 00:59:19.786649 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-02 00:59:19.786658 | orchestrator | changed: [testbed-node-1] =>
(item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-02 00:59:19.786665 | orchestrator |
2026-03-02 00:59:19.786674 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-03-02 00:59:19.786693 | orchestrator | Monday 02 March 2026 00:57:56 +0000 (0:00:02.551) 0:00:18.748 **********
2026-03-02 00:59:19.786700 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-02 00:59:19.786707 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-02 00:59:19.786715 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-02 00:59:19.786722 | orchestrator |
2026-03-02 00:59:19.786729 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-03-02 00:59:19.786737 | orchestrator | Monday 02 March 2026 00:57:59 +0000 (0:00:02.334) 0:00:21.082 **********
2026-03-02 00:59:19.786744 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:59:19.786752 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:59:19.786759 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:59:19.786766 | orchestrator |
2026-03-02 00:59:19.786773 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-03-02 00:59:19.786787 | orchestrator | Monday 02 March 2026 00:57:59 +0000 (0:00:00.285) 0:00:21.367 **********
2026-03-02 00:59:19.786795 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:59:19.786803 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:59:19.786810 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:59:19.786818 | orchestrator |
2026-03-02 00:59:19.786824 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-02 00:59:19.786832 | orchestrator | Monday 02
March 2026 00:57:59 +0000 (0:00:00.311) 0:00:21.679 ********** 2026-03-02 00:59:19.786839 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 00:59:19.786847 | orchestrator | 2026-03-02 00:59:19.786854 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-02 00:59:19.786862 | orchestrator | Monday 02 March 2026 00:58:00 +0000 (0:00:00.775) 0:00:22.456 ********** 2026-03-02 00:59:19.786873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-02 00:59:19.786922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-02 00:59:19.786932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-02 00:59:19.786945 | orchestrator | 2026-03-02 00:59:19.786953 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-02 00:59:19.786960 | orchestrator | Monday 02 March 2026 00:58:02 +0000 (0:00:01.482) 0:00:23.938 ********** 2026-03-02 00:59:19.786979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 
'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-02 00:59:19.786989 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:59:19.787002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-02 00:59:19.787015 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:59:19.787029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-02 00:59:19.787038 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:59:19.787046 | orchestrator | 2026-03-02 00:59:19.787053 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-02 00:59:19.787067 | orchestrator | Monday 02 March 2026 00:58:02 +0000 (0:00:00.633) 0:00:24.572 ********** 2026-03-02 00:59:19.787082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-02 00:59:19.787091 | orchestrator | skipping: [testbed-node-0] 2026-03-02 00:59:19.787103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-02 00:59:19.787121 | orchestrator | skipping: [testbed-node-1] 2026-03-02 00:59:19.787140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': 
{'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-02 00:59:19.787149 | orchestrator | skipping: [testbed-node-2] 2026-03-02 00:59:19.787158 | orchestrator | 2026-03-02 00:59:19.787166 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-02 00:59:19.787175 | orchestrator | Monday 02 March 2026 00:58:03 +0000 (0:00:00.806) 0:00:25.378 ********** 2026-03-02 00:59:19.787183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-02 00:59:19.787208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-02 00:59:19.787219 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-02 00:59:19.787233 | orchestrator |
2026-03-02 00:59:19.787241 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-02 00:59:19.787250 | orchestrator | Monday 02 March 2026 00:58:05 +0000 (0:00:01.535) 0:00:26.913 **********
2026-03-02 00:59:19.787258 | orchestrator | skipping: [testbed-node-0]
2026-03-02 00:59:19.787266 | orchestrator | skipping: [testbed-node-1]
2026-03-02 00:59:19.787274 | orchestrator | skipping: [testbed-node-2]
2026-03-02 00:59:19.787283 | orchestrator |
2026-03-02 00:59:19.787291 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-02 00:59:19.787299 | orchestrator | Monday 02 March 2026 00:58:05 +0000 (0:00:00.327) 0:00:27.240 **********
2026-03-02 00:59:19.787307 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 00:59:19.787316 | orchestrator |
2026-03-02 00:59:19.787324 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-03-02 00:59:19.787336 | orchestrator | Monday 02 March 2026 00:58:05 +0000 (0:00:00.606) 0:00:27.847 **********
2026-03-02 00:59:19.787344 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:59:19.787351 | orchestrator |
2026-03-02 00:59:19.787358 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-03-02 00:59:19.787365 | orchestrator | Monday 02 March 2026 00:58:09 +0000 (0:00:03.069) 0:00:30.916 **********
2026-03-02 00:59:19.787372 | orchestrator | changed: [testbed-node-0]
2026-03-02 00:59:19.787379 | orchestrator |
2026-03-02 00:59:19.787386 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-03-02 00:59:19.787393 | orchestrator | Monday 02 March
2026 00:58:12 +0000 (0:00:03.546) 0:00:34.462 ********** 2026-03-02 00:59:19.787400 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:59:19.787408 | orchestrator | 2026-03-02 00:59:19.787415 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-02 00:59:19.787421 | orchestrator | Monday 02 March 2026 00:58:28 +0000 (0:00:16.099) 0:00:50.561 ********** 2026-03-02 00:59:19.787428 | orchestrator | 2026-03-02 00:59:19.787434 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-02 00:59:19.787567 | orchestrator | Monday 02 March 2026 00:58:28 +0000 (0:00:00.059) 0:00:50.621 ********** 2026-03-02 00:59:19.787578 | orchestrator | 2026-03-02 00:59:19.787584 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-02 00:59:19.787592 | orchestrator | Monday 02 March 2026 00:58:28 +0000 (0:00:00.059) 0:00:50.680 ********** 2026-03-02 00:59:19.787598 | orchestrator | 2026-03-02 00:59:19.787604 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-02 00:59:19.787619 | orchestrator | Monday 02 March 2026 00:58:28 +0000 (0:00:00.061) 0:00:50.742 ********** 2026-03-02 00:59:19.787625 | orchestrator | changed: [testbed-node-0] 2026-03-02 00:59:19.787633 | orchestrator | changed: [testbed-node-1] 2026-03-02 00:59:19.787639 | orchestrator | changed: [testbed-node-2] 2026-03-02 00:59:19.787646 | orchestrator | 2026-03-02 00:59:19.787653 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 00:59:19.787660 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-02 00:59:19.787668 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-02 00:59:19.787676 | orchestrator | testbed-node-2 : ok=34  changed=8  
unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-02 00:59:19.787683 | orchestrator | 2026-03-02 00:59:19.787689 | orchestrator | 2026-03-02 00:59:19.787696 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 00:59:19.787703 | orchestrator | Monday 02 March 2026 00:59:17 +0000 (0:00:48.480) 0:01:39.222 ********** 2026-03-02 00:59:19.787710 | orchestrator | =============================================================================== 2026-03-02 00:59:19.787717 | orchestrator | horizon : Restart horizon container ------------------------------------ 48.48s 2026-03-02 00:59:19.787724 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.10s 2026-03-02 00:59:19.787730 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 3.55s 2026-03-02 00:59:19.787737 | orchestrator | horizon : Creating Horizon database ------------------------------------- 3.07s 2026-03-02 00:59:19.787743 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.55s 2026-03-02 00:59:19.787749 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.33s 2026-03-02 00:59:19.787755 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.98s 2026-03-02 00:59:19.787761 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.67s 2026-03-02 00:59:19.787767 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.54s 2026-03-02 00:59:19.787773 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.48s 2026-03-02 00:59:19.787779 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.25s 2026-03-02 00:59:19.787785 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.81s 
2026-03-02 00:59:19.787790 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.78s 2026-03-02 00:59:19.787796 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s 2026-03-02 00:59:19.787803 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.63s 2026-03-02 00:59:19.787810 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.61s 2026-03-02 00:59:19.787816 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.53s 2026-03-02 00:59:19.787822 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.52s 2026-03-02 00:59:19.787829 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s 2026-03-02 00:59:19.787835 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.49s 2026-03-02 00:59:19.787842 | orchestrator | 2026-03-02 00:59:19 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:59:22.813596 | orchestrator | 2026-03-02 00:59:22 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED 2026-03-02 00:59:22.815150 | orchestrator | 2026-03-02 00:59:22 | INFO  | Task e30584b4-f227-4386-9d4f-57b9b03dfe6c is in state STARTED 2026-03-02 00:59:22.815219 | orchestrator | 2026-03-02 00:59:22 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:59:25.865063 | orchestrator | 2026-03-02 00:59:25 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED 2026-03-02 00:59:25.866729 | orchestrator | 2026-03-02 00:59:25 | INFO  | Task e30584b4-f227-4386-9d4f-57b9b03dfe6c is in state STARTED 2026-03-02 00:59:25.866868 | orchestrator | 2026-03-02 00:59:25 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:59:28.910621 | orchestrator | 2026-03-02 00:59:28 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in 
state STARTED 2026-03-02 00:59:28.911105 | orchestrator | 2026-03-02 00:59:28 | INFO  | Task e30584b4-f227-4386-9d4f-57b9b03dfe6c is in state STARTED 2026-03-02 00:59:28.911137 | orchestrator | 2026-03-02 00:59:28 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:59:31.954966 | orchestrator | 2026-03-02 00:59:31 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED 2026-03-02 00:59:31.955336 | orchestrator | 2026-03-02 00:59:31 | INFO  | Task e30584b4-f227-4386-9d4f-57b9b03dfe6c is in state SUCCESS 2026-03-02 00:59:31.957073 | orchestrator | 2026-03-02 00:59:31 | INFO  | Task b4ab6834-4b51-4ab3-b291-05810765cdc0 is in state STARTED 2026-03-02 00:59:31.957139 | orchestrator | 2026-03-02 00:59:31 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:59:34.995509 | orchestrator | 2026-03-02 00:59:34 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED 2026-03-02 00:59:34.995775 | orchestrator | 2026-03-02 00:59:34 | INFO  | Task b4ab6834-4b51-4ab3-b291-05810765cdc0 is in state STARTED 2026-03-02 00:59:34.995795 | orchestrator | 2026-03-02 00:59:34 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:59:38.038644 | orchestrator | 2026-03-02 00:59:38 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED 2026-03-02 00:59:38.041155 | orchestrator | 2026-03-02 00:59:38 | INFO  | Task b4ab6834-4b51-4ab3-b291-05810765cdc0 is in state STARTED 2026-03-02 00:59:38.041238 | orchestrator | 2026-03-02 00:59:38 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:59:41.083065 | orchestrator | 2026-03-02 00:59:41 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED 2026-03-02 00:59:41.084203 | orchestrator | 2026-03-02 00:59:41 | INFO  | Task b4ab6834-4b51-4ab3-b291-05810765cdc0 is in state STARTED 2026-03-02 00:59:41.084262 | orchestrator | 2026-03-02 00:59:41 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:59:44.122805 | orchestrator | 2026-03-02 00:59:44 | 
INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED 2026-03-02 00:59:44.124003 | orchestrator | 2026-03-02 00:59:44 | INFO  | Task b4ab6834-4b51-4ab3-b291-05810765cdc0 is in state STARTED 2026-03-02 00:59:44.124168 | orchestrator | 2026-03-02 00:59:44 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:59:47.178505 | orchestrator | 2026-03-02 00:59:47 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED 2026-03-02 00:59:47.179572 | orchestrator | 2026-03-02 00:59:47 | INFO  | Task b4ab6834-4b51-4ab3-b291-05810765cdc0 is in state STARTED 2026-03-02 00:59:47.179611 | orchestrator | 2026-03-02 00:59:47 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:59:50.225974 | orchestrator | 2026-03-02 00:59:50 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED 2026-03-02 00:59:50.227979 | orchestrator | 2026-03-02 00:59:50 | INFO  | Task b4ab6834-4b51-4ab3-b291-05810765cdc0 is in state STARTED 2026-03-02 00:59:50.228049 | orchestrator | 2026-03-02 00:59:50 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:59:53.266745 | orchestrator | 2026-03-02 00:59:53 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED 2026-03-02 00:59:53.268826 | orchestrator | 2026-03-02 00:59:53 | INFO  | Task b4ab6834-4b51-4ab3-b291-05810765cdc0 is in state STARTED 2026-03-02 00:59:53.269175 | orchestrator | 2026-03-02 00:59:53 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:59:56.320428 | orchestrator | 2026-03-02 00:59:56 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED 2026-03-02 00:59:56.322002 | orchestrator | 2026-03-02 00:59:56 | INFO  | Task b4ab6834-4b51-4ab3-b291-05810765cdc0 is in state STARTED 2026-03-02 00:59:56.322216 | orchestrator | 2026-03-02 00:59:56 | INFO  | Wait 1 second(s) until the next check 2026-03-02 00:59:59.374130 | orchestrator | 2026-03-02 00:59:59 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED 
2026-03-02 00:59:59.375771 | orchestrator | 2026-03-02 00:59:59 | INFO  | Task b4ab6834-4b51-4ab3-b291-05810765cdc0 is in state STARTED 2026-03-02 00:59:59.375841 | orchestrator | 2026-03-02 00:59:59 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:00:02.420038 | orchestrator | 2026-03-02 01:00:02 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED 2026-03-02 01:00:02.421286 | orchestrator | 2026-03-02 01:00:02 | INFO  | Task b4ab6834-4b51-4ab3-b291-05810765cdc0 is in state STARTED 2026-03-02 01:00:02.421339 | orchestrator | 2026-03-02 01:00:02 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:00:05.467302 | orchestrator | 2026-03-02 01:00:05 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED 2026-03-02 01:00:05.468821 | orchestrator | 2026-03-02 01:00:05 | INFO  | Task b4ab6834-4b51-4ab3-b291-05810765cdc0 is in state STARTED 2026-03-02 01:00:05.468872 | orchestrator | 2026-03-02 01:00:05 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:00:08.514231 | orchestrator | 2026-03-02 01:00:08 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED 2026-03-02 01:00:08.515841 | orchestrator | 2026-03-02 01:00:08 | INFO  | Task b4ab6834-4b51-4ab3-b291-05810765cdc0 is in state STARTED 2026-03-02 01:00:08.515906 | orchestrator | 2026-03-02 01:00:08 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:00:11.560105 | orchestrator | 2026-03-02 01:00:11 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED 2026-03-02 01:00:11.561807 | orchestrator | 2026-03-02 01:00:11 | INFO  | Task b4ab6834-4b51-4ab3-b291-05810765cdc0 is in state STARTED 2026-03-02 01:00:11.561870 | orchestrator | 2026-03-02 01:00:11 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:00:14.601769 | orchestrator | 2026-03-02 01:00:14 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED 2026-03-02 01:00:14.603109 | orchestrator | 2026-03-02 01:00:14 | INFO  | Task 
b4ab6834-4b51-4ab3-b291-05810765cdc0 is in state STARTED 2026-03-02 01:00:14.603156 | orchestrator | 2026-03-02 01:00:14 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:00:17.636142 | orchestrator | 2026-03-02 01:00:17 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state STARTED 2026-03-02 01:00:17.638004 | orchestrator | 2026-03-02 01:00:17 | INFO  | Task b4ab6834-4b51-4ab3-b291-05810765cdc0 is in state STARTED 2026-03-02 01:00:17.638206 | orchestrator | 2026-03-02 01:00:17 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:00:20.683490 | orchestrator | 2026-03-02 01:00:20.683968 | orchestrator | 2026-03-02 01:00:20.683997 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-02 01:00:20.684007 | orchestrator | 2026-03-02 01:00:20.684014 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-02 01:00:20.684022 | orchestrator | Monday 02 March 2026 00:58:55 +0000 (0:00:00.155) 0:00:00.155 ********** 2026-03-02 01:00:20.684063 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-02 01:00:20.684073 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-02 01:00:20.684080 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-02 01:00:20.684086 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-02 01:00:20.684092 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-02 01:00:20.684098 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-02 01:00:20.684104 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.glance.keyring) 2026-03-02 01:00:20.684110 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-02 01:00:20.684116 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-02 01:00:20.684122 | orchestrator | 2026-03-02 01:00:20.684129 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-02 01:00:20.684135 | orchestrator | Monday 02 March 2026 00:59:00 +0000 (0:00:05.000) 0:00:05.156 ********** 2026-03-02 01:00:20.684141 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-02 01:00:20.684148 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-02 01:00:20.684154 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-02 01:00:20.684161 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-02 01:00:20.684168 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-02 01:00:20.684174 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-02 01:00:20.684181 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-02 01:00:20.684187 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-02 01:00:20.684193 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-02 01:00:20.684199 | orchestrator | 2026-03-02 01:00:20.684206 | orchestrator | TASK [Create share directory] ************************************************** 
2026-03-02 01:00:20.684212 | orchestrator | Monday 02 March 2026 00:59:04 +0000 (0:00:04.343) 0:00:09.499 ********** 2026-03-02 01:00:20.684220 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-02 01:00:20.684227 | orchestrator | 2026-03-02 01:00:20.684233 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-02 01:00:20.684240 | orchestrator | Monday 02 March 2026 00:59:05 +0000 (0:00:01.058) 0:00:10.558 ********** 2026-03-02 01:00:20.684247 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-02 01:00:20.684255 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-02 01:00:20.684282 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-02 01:00:20.684289 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-02 01:00:20.684295 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-02 01:00:20.684301 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-02 01:00:20.684308 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-02 01:00:20.684344 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-02 01:00:20.684360 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-02 01:00:20.684365 | orchestrator | 2026-03-02 01:00:20.684369 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-03-02 01:00:20.684372 | orchestrator | Monday 02 March 2026 00:59:20 +0000 (0:00:14.650) 0:00:25.208 ********** 2026-03-02 01:00:20.684377 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 
2026-03-02 01:00:20.684381 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-02 01:00:20.684385 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-02 01:00:20.684389 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-02 01:00:20.684409 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-02 01:00:20.684413 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-02 01:00:20.684417 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-02 01:00:20.684421 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-02 01:00:20.684425 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-02 01:00:20.684428 | orchestrator | 2026-03-02 01:00:20.684432 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-02 01:00:20.684436 | orchestrator | Monday 02 March 2026 00:59:22 +0000 (0:00:02.656) 0:00:27.865 ********** 2026-03-02 01:00:20.684440 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-02 01:00:20.684444 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-02 01:00:20.684448 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-02 01:00:20.684452 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-02 01:00:20.684455 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-02 01:00:20.684459 | 
orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-02 01:00:20.684463 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-02 01:00:20.684466 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-02 01:00:20.684470 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-02 01:00:20.684474 | orchestrator | 2026-03-02 01:00:20.684478 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 01:00:20.684481 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 01:00:20.684487 | orchestrator | 2026-03-02 01:00:20.684491 | orchestrator | 2026-03-02 01:00:20.684495 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 01:00:20.684498 | orchestrator | Monday 02 March 2026 00:59:28 +0000 (0:00:06.190) 0:00:34.055 ********** 2026-03-02 01:00:20.684502 | orchestrator | =============================================================================== 2026-03-02 01:00:20.684506 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.65s 2026-03-02 01:00:20.684510 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.19s 2026-03-02 01:00:20.684514 | orchestrator | Check if ceph keys exist ------------------------------------------------ 5.00s 2026-03-02 01:00:20.684518 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.34s 2026-03-02 01:00:20.684522 | orchestrator | Check if target directories exist --------------------------------------- 2.66s 2026-03-02 01:00:20.684533 | orchestrator | Create share directory -------------------------------------------------- 1.06s 2026-03-02 01:00:20.684537 | orchestrator | 2026-03-02 01:00:20.684542 | orchestrator | 2026-03-02 
01:00:20.684546 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-02 01:00:20.684551 | orchestrator | 2026-03-02 01:00:20.684556 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-02 01:00:20.684561 | orchestrator | Monday 02 March 2026 00:57:38 +0000 (0:00:00.284) 0:00:00.284 ********** 2026-03-02 01:00:20.684566 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:00:20.684571 | orchestrator | ok: [testbed-node-1] 2026-03-02 01:00:20.684576 | orchestrator | ok: [testbed-node-2] 2026-03-02 01:00:20.684581 | orchestrator | 2026-03-02 01:00:20.684585 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-02 01:00:20.684589 | orchestrator | Monday 02 March 2026 00:57:38 +0000 (0:00:00.343) 0:00:00.628 ********** 2026-03-02 01:00:20.684598 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-02 01:00:20.684604 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-02 01:00:20.684608 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-02 01:00:20.684612 | orchestrator | 2026-03-02 01:00:20.684617 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-02 01:00:20.684621 | orchestrator | 2026-03-02 01:00:20.684626 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-02 01:00:20.684630 | orchestrator | Monday 02 March 2026 00:57:39 +0000 (0:00:00.479) 0:00:01.108 ********** 2026-03-02 01:00:20.684635 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:00:20.684639 | orchestrator | 2026-03-02 01:00:20.684643 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-02 01:00:20.684648 | orchestrator | Monday 02 March 
2026 00:57:39 +0000 (0:00:00.564) 0:00:01.673 ********** 2026-03-02 01:00:20.684668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-02 01:00:20.684675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-02 01:00:20.684685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-02 01:00:20.684694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-02 01:00:20.684701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-02 01:00:20.684710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-02 01:00:20.684714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-02 01:00:20.684718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-02 01:00:20.684727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-02 01:00:20.684732 | orchestrator | 2026-03-02 01:00:20.684736 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-03-02 01:00:20.684740 | orchestrator | Monday 02 March 2026 00:57:41 +0000 (0:00:01.822) 0:00:03.495 ********** 2026-03-02 01:00:20.684744 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:00:20.684748 | orchestrator | 2026-03-02 01:00:20.684752 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-02 01:00:20.684756 | orchestrator | Monday 02 March 2026 00:57:41 +0000 (0:00:00.136) 0:00:03.631 ********** 2026-03-02 01:00:20.684760 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:00:20.684763 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:00:20.684767 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:00:20.684771 | orchestrator | 
2026-03-02 01:00:20.684774 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-02 01:00:20.684778 | orchestrator | Monday 02 March 2026 00:57:42 +0000 (0:00:00.416) 0:00:04.048 ********** 2026-03-02 01:00:20.684782 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-02 01:00:20.684786 | orchestrator | 2026-03-02 01:00:20.684790 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-02 01:00:20.684796 | orchestrator | Monday 02 March 2026 00:57:43 +0000 (0:00:00.846) 0:00:04.895 ********** 2026-03-02 01:00:20.684800 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:00:20.684804 | orchestrator | 2026-03-02 01:00:20.684808 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-02 01:00:20.684812 | orchestrator | Monday 02 March 2026 00:57:43 +0000 (0:00:00.519) 0:00:05.414 ********** 2026-03-02 01:00:20.684819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-02 01:00:20.684824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-02 01:00:20.684834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-02 01:00:20.684838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-02 01:00:20.684847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-02 01:00:20.684851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-02 01:00:20.684859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-02 01:00:20.684868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-02 01:00:20.684872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-02 01:00:20.684876 | orchestrator | 2026-03-02 01:00:20.684880 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-02 01:00:20.684884 | orchestrator | Monday 02 March 2026 00:57:47 +0000 (0:00:03.638) 0:00:09.053 ********** 2026-03-02 01:00:20.684888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-02 01:00:20.684895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-02 01:00:20.684899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-02 01:00:20.684903 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:00:20.684911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-02 01:00:20.684925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-02 01:00:20.684930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-02 01:00:20.684934 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:00:20.684942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-02 01:00:20.684946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-02 01:00:20.684956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-02 01:00:20.684963 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:00:20.684967 | orchestrator | 2026-03-02 01:00:20.684971 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-02 01:00:20.684975 | orchestrator | Monday 02 March 2026 00:57:47 +0000 (0:00:00.566) 0:00:09.619 ********** 2026-03-02 01:00:20.684979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-02 01:00:20.684983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-02 01:00:20.684991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-02 01:00:20.684995 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:00:20.684999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-02 01:00:20.685011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-03-02 01:00:20.685016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-02 01:00:20.685020 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:00:20.685024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-02 01:00:20.685028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-02 01:00:20.685035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-02 01:00:20.685039 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:00:20.685043 | orchestrator | 2026-03-02 01:00:20.685047 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-02 01:00:20.685051 | orchestrator | Monday 02 March 2026 00:57:48 +0000 (0:00:00.763) 0:00:10.383 ********** 2026-03-02 01:00:20.685063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-02 01:00:20.685067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-02 01:00:20.685071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-02 01:00:20.685078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-02 01:00:20.685083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-02 01:00:20.685094 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-02 01:00:20.685098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-02 01:00:20.685102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-02 01:00:20.685106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-02 01:00:20.685110 | orchestrator | 2026-03-02 01:00:20.685114 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-02 01:00:20.685118 | orchestrator | Monday 02 March 2026 00:57:51 +0000 (0:00:03.175) 0:00:13.558 ********** 2026-03-02 01:00:20.685125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-02 01:00:20.685136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-02 01:00:20.685141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-02 01:00:20.685145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-02 01:00:20.685149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-02 01:00:20.685156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-02 01:00:20.685163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-02 01:00:20.685170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-02 01:00:20.685174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-02 01:00:20.685178 | orchestrator | 2026-03-02 01:00:20.685182 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-02 01:00:20.685186 | orchestrator | 
Monday 02 March 2026 00:57:57 +0000 (0:00:05.557) 0:00:19.116 ********** 2026-03-02 01:00:20.685190 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:00:20.685194 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:00:20.685198 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:00:20.685202 | orchestrator | 2026-03-02 01:00:20.685205 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-02 01:00:20.685209 | orchestrator | Monday 02 March 2026 00:57:58 +0000 (0:00:01.653) 0:00:20.769 ********** 2026-03-02 01:00:20.685213 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:00:20.685217 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:00:20.685220 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:00:20.685224 | orchestrator | 2026-03-02 01:00:20.685228 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-02 01:00:20.685232 | orchestrator | Monday 02 March 2026 00:57:59 +0000 (0:00:00.546) 0:00:21.316 ********** 2026-03-02 01:00:20.685235 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:00:20.685239 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:00:20.685243 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:00:20.685247 | orchestrator | 2026-03-02 01:00:20.685250 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-02 01:00:20.685254 | orchestrator | Monday 02 March 2026 00:57:59 +0000 (0:00:00.332) 0:00:21.648 ********** 2026-03-02 01:00:20.685258 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:00:20.685261 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:00:20.685265 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:00:20.685269 | orchestrator | 2026-03-02 01:00:20.685272 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-02 01:00:20.685276 | orchestrator | Monday 
02 March 2026 00:58:00 +0000 (0:00:00.492) 0:00:22.141 ********** 2026-03-02 01:00:20.685283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-02 01:00:20.685291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-02 01:00:20.685298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-02 01:00:20.685302 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:00:20.685306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-02 01:00:20.685310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-02 01:00:20.685330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-02 01:00:20.685340 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:00:20.685348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-03-02 01:00:20.685353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-02 01:00:20.685360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-02 01:00:20.685364 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:00:20.685367 | orchestrator | 2026-03-02 01:00:20.685371 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-02 01:00:20.685375 | orchestrator | Monday 02 March 2026 00:58:00 +0000 (0:00:00.650) 0:00:22.792 ********** 2026-03-02 01:00:20.685379 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:00:20.685383 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:00:20.685386 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:00:20.685390 | orchestrator | 2026-03-02 01:00:20.685394 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] 
****************************** 2026-03-02 01:00:20.685398 | orchestrator | Monday 02 March 2026 00:58:01 +0000 (0:00:00.306) 0:00:23.099 ********** 2026-03-02 01:00:20.685402 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-02 01:00:20.685405 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-02 01:00:20.685409 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-02 01:00:20.685413 | orchestrator | 2026-03-02 01:00:20.685421 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-02 01:00:20.685425 | orchestrator | Monday 02 March 2026 00:58:02 +0000 (0:00:01.567) 0:00:24.667 ********** 2026-03-02 01:00:20.685429 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-02 01:00:20.685433 | orchestrator | 2026-03-02 01:00:20.685436 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-02 01:00:20.685440 | orchestrator | Monday 02 March 2026 00:58:03 +0000 (0:00:00.964) 0:00:25.632 ********** 2026-03-02 01:00:20.685444 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:00:20.685448 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:00:20.685451 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:00:20.685455 | orchestrator | 2026-03-02 01:00:20.685458 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-02 01:00:20.685462 | orchestrator | Monday 02 March 2026 00:58:04 +0000 (0:00:00.964) 0:00:26.596 ********** 2026-03-02 01:00:20.685466 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-02 01:00:20.685470 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-02 01:00:20.685473 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-02 01:00:20.685477 | orchestrator | 2026-03-02 
01:00:20.685481 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-02 01:00:20.685485 | orchestrator | Monday 02 March 2026 00:58:05 +0000 (0:00:01.213) 0:00:27.810 ********** 2026-03-02 01:00:20.685489 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:00:20.685493 | orchestrator | ok: [testbed-node-1] 2026-03-02 01:00:20.685497 | orchestrator | ok: [testbed-node-2] 2026-03-02 01:00:20.685500 | orchestrator | 2026-03-02 01:00:20.685504 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-02 01:00:20.685508 | orchestrator | Monday 02 March 2026 00:58:06 +0000 (0:00:00.299) 0:00:28.109 ********** 2026-03-02 01:00:20.685511 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-02 01:00:20.685515 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-02 01:00:20.685519 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-02 01:00:20.685526 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-02 01:00:20.685530 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-02 01:00:20.685534 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-02 01:00:20.685538 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-02 01:00:20.685542 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-02 01:00:20.685545 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-02 01:00:20.685549 | orchestrator | changed: 
[testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-02 01:00:20.685553 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-02 01:00:20.685557 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-02 01:00:20.685560 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-02 01:00:20.685564 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-02 01:00:20.685570 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-02 01:00:20.685574 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-02 01:00:20.685578 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-02 01:00:20.685586 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-02 01:00:20.685590 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-02 01:00:20.685593 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-02 01:00:20.685597 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-02 01:00:20.685601 | orchestrator | 2026-03-02 01:00:20.685605 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-02 01:00:20.685608 | orchestrator | Monday 02 March 2026 00:58:15 +0000 (0:00:09.472) 0:00:37.582 ********** 2026-03-02 01:00:20.685612 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-02 01:00:20.685616 | orchestrator | changed: 
[testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-02 01:00:20.685620 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-02 01:00:20.685623 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-02 01:00:20.685627 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-02 01:00:20.685631 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-02 01:00:20.685634 | orchestrator | 2026-03-02 01:00:20.685638 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-02 01:00:20.685642 | orchestrator | Monday 02 March 2026 00:58:18 +0000 (0:00:02.890) 0:00:40.473 ********** 2026-03-02 01:00:20.685649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': 2026-03-02 01:00:20 | INFO  | Task e8b80cf5-90ff-4c9a-a5d3-d9978063419f is in state SUCCESS 2026-03-02 01:00:20.687970 | orchestrator | True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-02 01:00:20.688059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-02 01:00:20.688070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-02 01:00:20.688088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-02 01:00:20.688094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-02 01:00:20.688098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-02 01:00:20.688116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-02 01:00:20.688122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-02 01:00:20.688130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}}) 2026-03-02 01:00:20.688135 | orchestrator | 2026-03-02 01:00:20.688140 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-02 01:00:20.688145 | orchestrator | Monday 02 March 2026 00:58:20 +0000 (0:00:02.265) 0:00:42.738 ********** 2026-03-02 01:00:20.688149 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:00:20.688154 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:00:20.688158 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:00:20.688162 | orchestrator | 2026-03-02 01:00:20.688166 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-02 01:00:20.688169 | orchestrator | Monday 02 March 2026 00:58:21 +0000 (0:00:00.240) 0:00:42.979 ********** 2026-03-02 01:00:20.688173 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:00:20.688177 | orchestrator | 2026-03-02 01:00:20.688181 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-02 01:00:20.688185 | orchestrator | Monday 02 March 2026 00:58:23 +0000 (0:00:02.416) 0:00:45.395 ********** 2026-03-02 01:00:20.688189 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:00:20.688192 | orchestrator | 2026-03-02 01:00:20.688196 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-02 01:00:20.688200 | orchestrator | Monday 02 March 2026 00:58:25 +0000 (0:00:02.452) 0:00:47.848 ********** 2026-03-02 01:00:20.688204 | orchestrator | ok: [testbed-node-1] 2026-03-02 01:00:20.688208 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:00:20.688211 | orchestrator | ok: [testbed-node-2] 2026-03-02 01:00:20.688215 | orchestrator | 2026-03-02 01:00:20.688219 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-02 01:00:20.688223 | orchestrator | Monday 02 March 2026 00:58:26 +0000 (0:00:00.869) 0:00:48.717 ********** 2026-03-02 
01:00:20.688226 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:00:20.688230 | orchestrator | ok: [testbed-node-1] 2026-03-02 01:00:20.688234 | orchestrator | ok: [testbed-node-2] 2026-03-02 01:00:20.688237 | orchestrator | 2026-03-02 01:00:20.688241 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-02 01:00:20.688246 | orchestrator | Monday 02 March 2026 00:58:27 +0000 (0:00:00.272) 0:00:48.990 ********** 2026-03-02 01:00:20.688263 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:00:20.688267 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:00:20.688271 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:00:20.688280 | orchestrator | 2026-03-02 01:00:20.688284 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-02 01:00:20.688288 | orchestrator | Monday 02 March 2026 00:58:27 +0000 (0:00:00.300) 0:00:49.291 ********** 2026-03-02 01:00:20.688291 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:00:20.688295 | orchestrator | 2026-03-02 01:00:20.688299 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-02 01:00:20.688303 | orchestrator | Monday 02 March 2026 00:58:41 +0000 (0:00:14.551) 0:01:03.842 ********** 2026-03-02 01:00:20.688306 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:00:20.688310 | orchestrator | 2026-03-02 01:00:20.688331 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-02 01:00:20.688337 | orchestrator | Monday 02 March 2026 00:58:53 +0000 (0:00:11.643) 0:01:15.486 ********** 2026-03-02 01:00:20.688343 | orchestrator | 2026-03-02 01:00:20.688348 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-02 01:00:20.688360 | orchestrator | Monday 02 March 2026 00:58:53 +0000 (0:00:00.064) 0:01:15.550 ********** 2026-03-02 01:00:20.688365 
| orchestrator | 2026-03-02 01:00:20.688371 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-02 01:00:20.688380 | orchestrator | Monday 02 March 2026 00:58:53 +0000 (0:00:00.062) 0:01:15.613 ********** 2026-03-02 01:00:20.688387 | orchestrator | 2026-03-02 01:00:20.688393 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-02 01:00:20.688401 | orchestrator | Monday 02 March 2026 00:58:53 +0000 (0:00:00.063) 0:01:15.677 ********** 2026-03-02 01:00:20.688407 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:00:20.688413 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:00:20.688419 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:00:20.688425 | orchestrator | 2026-03-02 01:00:20.688432 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-02 01:00:20.688441 | orchestrator | Monday 02 March 2026 00:59:09 +0000 (0:00:15.758) 0:01:31.435 ********** 2026-03-02 01:00:20.688445 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:00:20.688449 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:00:20.688452 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:00:20.688456 | orchestrator | 2026-03-02 01:00:20.688460 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-02 01:00:20.688470 | orchestrator | Monday 02 March 2026 00:59:14 +0000 (0:00:04.886) 0:01:36.322 ********** 2026-03-02 01:00:20.688474 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:00:20.688478 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:00:20.688481 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:00:20.688490 | orchestrator | 2026-03-02 01:00:20.688494 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-02 01:00:20.688498 | orchestrator | Monday 02 March 2026 00:59:19 +0000 
(0:00:05.501) 0:01:41.823 ********** 2026-03-02 01:00:20.688502 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:00:20.688506 | orchestrator | 2026-03-02 01:00:20.688510 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-02 01:00:20.688513 | orchestrator | Monday 02 March 2026 00:59:20 +0000 (0:00:00.619) 0:01:42.443 ********** 2026-03-02 01:00:20.688518 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:00:20.688523 | orchestrator | ok: [testbed-node-2] 2026-03-02 01:00:20.688527 | orchestrator | ok: [testbed-node-1] 2026-03-02 01:00:20.688531 | orchestrator | 2026-03-02 01:00:20.688536 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-02 01:00:20.688540 | orchestrator | Monday 02 March 2026 00:59:21 +0000 (0:00:00.777) 0:01:43.220 ********** 2026-03-02 01:00:20.688544 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:00:20.688549 | orchestrator | 2026-03-02 01:00:20.688553 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-02 01:00:20.688558 | orchestrator | Monday 02 March 2026 00:59:23 +0000 (0:00:01.667) 0:01:44.887 ********** 2026-03-02 01:00:20.688562 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-02 01:00:20.688567 | orchestrator | 2026-03-02 01:00:20.688571 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-02 01:00:20.688576 | orchestrator | Monday 02 March 2026 00:59:36 +0000 (0:00:13.825) 0:01:58.712 ********** 2026-03-02 01:00:20.688580 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-02 01:00:20.688585 | orchestrator | 2026-03-02 01:00:20.688590 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-02 01:00:20.688594 | 
orchestrator | Monday 02 March 2026 01:00:06 +0000 (0:00:29.572) 0:02:28.285 ********** 2026-03-02 01:00:20.688599 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-02 01:00:20.688605 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-02 01:00:20.688609 | orchestrator | 2026-03-02 01:00:20.688619 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-02 01:00:20.688623 | orchestrator | Monday 02 March 2026 01:00:14 +0000 (0:00:08.371) 0:02:36.657 ********** 2026-03-02 01:00:20.688628 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:00:20.688632 | orchestrator | 2026-03-02 01:00:20.688637 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-02 01:00:20.688641 | orchestrator | Monday 02 March 2026 01:00:14 +0000 (0:00:00.121) 0:02:36.779 ********** 2026-03-02 01:00:20.688646 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:00:20.688650 | orchestrator | 2026-03-02 01:00:20.688654 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-02 01:00:20.688659 | orchestrator | Monday 02 March 2026 01:00:15 +0000 (0:00:00.108) 0:02:36.887 ********** 2026-03-02 01:00:20.688664 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:00:20.688668 | orchestrator | 2026-03-02 01:00:20.688673 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-02 01:00:20.688677 | orchestrator | Monday 02 March 2026 01:00:15 +0000 (0:00:00.113) 0:02:37.001 ********** 2026-03-02 01:00:20.688681 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:00:20.688686 | orchestrator | 2026-03-02 01:00:20.688691 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-02 01:00:20.688695 | orchestrator | 
Monday 02 March 2026 01:00:15 +0000 (0:00:00.430) 0:02:37.431 ********** 2026-03-02 01:00:20.688700 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:00:20.688705 | orchestrator | 2026-03-02 01:00:20.688709 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-02 01:00:20.688713 | orchestrator | Monday 02 March 2026 01:00:19 +0000 (0:00:03.771) 0:02:41.202 ********** 2026-03-02 01:00:20.688720 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:00:20.688726 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:00:20.688735 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:00:20.688744 | orchestrator | 2026-03-02 01:00:20.688752 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 01:00:20.688760 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-02 01:00:20.688772 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-02 01:00:20.688779 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-02 01:00:20.688785 | orchestrator | 2026-03-02 01:00:20.688791 | orchestrator | 2026-03-02 01:00:20.688798 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 01:00:20.688804 | orchestrator | Monday 02 March 2026 01:00:19 +0000 (0:00:00.330) 0:02:41.532 ********** 2026-03-02 01:00:20.688810 | orchestrator | =============================================================================== 2026-03-02 01:00:20.688820 | orchestrator | service-ks-register : keystone | Creating services --------------------- 29.57s 2026-03-02 01:00:20.688827 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 15.76s 2026-03-02 01:00:20.688833 | orchestrator | keystone : Running Keystone bootstrap 
container ------------------------ 14.55s 2026-03-02 01:00:20.688838 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.83s 2026-03-02 01:00:20.688844 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.64s 2026-03-02 01:00:20.688850 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.47s 2026-03-02 01:00:20.688856 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 8.37s 2026-03-02 01:00:20.688863 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.56s 2026-03-02 01:00:20.688869 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.50s 2026-03-02 01:00:20.688880 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.89s 2026-03-02 01:00:20.688886 | orchestrator | keystone : Creating default user role ----------------------------------- 3.77s 2026-03-02 01:00:20.688892 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.64s 2026-03-02 01:00:20.688898 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.18s 2026-03-02 01:00:20.688905 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.89s 2026-03-02 01:00:20.688910 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.45s 2026-03-02 01:00:20.688916 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.42s 2026-03-02 01:00:20.688922 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.27s 2026-03-02 01:00:20.688928 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.82s 2026-03-02 01:00:20.688933 | orchestrator | keystone : Run key distribution 
----------------------------------------- 1.67s 2026-03-02 01:00:20.688939 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.65s 2026-03-02 01:00:20.688944 | orchestrator | 2026-03-02 01:00:20 | INFO  | Task b4ab6834-4b51-4ab3-b291-05810765cdc0 is in state STARTED 2026-03-02 01:00:20.688950 | orchestrator | 2026-03-02 01:00:20 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:00:23.713265 | orchestrator | 2026-03-02 01:00:23 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:00:23.715384 | orchestrator | 2026-03-02 01:00:23 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:00:23.715610 | orchestrator | 2026-03-02 01:00:23 | INFO  | Task b4ab6834-4b51-4ab3-b291-05810765cdc0 is in state STARTED 2026-03-02 01:00:23.716280 | orchestrator | 2026-03-02 01:00:23 | INFO  | Task b02f6b27-7397-4d55-8bab-61a2587fc743 is in state STARTED 2026-03-02 01:00:23.716939 | orchestrator | 2026-03-02 01:00:23 | INFO  | Task 1eaa4b39-072e-4751-a21e-2b06d801c38b is in state STARTED 2026-03-02 01:00:23.718681 | orchestrator | 2026-03-02 01:00:23 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:00:26.750336 | orchestrator | 2026-03-02 01:00:26 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:00:26.751079 | orchestrator | 2026-03-02 01:00:26 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:00:26.753847 | orchestrator | 2026-03-02 01:00:26 | INFO  | Task b4ab6834-4b51-4ab3-b291-05810765cdc0 is in state SUCCESS 2026-03-02 01:00:26.754582 | orchestrator | 2026-03-02 01:00:26 | INFO  | Task b02f6b27-7397-4d55-8bab-61a2587fc743 is in state STARTED 2026-03-02 01:00:26.756099 | orchestrator | 2026-03-02 01:00:26 | INFO  | Task 1eaa4b39-072e-4751-a21e-2b06d801c38b is in state STARTED 2026-03-02 01:00:26.756148 | orchestrator | 2026-03-02 01:00:26 | INFO  | Wait 1 second(s) until the next 
check 2026-03-02 01:00:29.799014 | orchestrator | 2026-03-02 01:00:29 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:00:29.800933 | orchestrator | 2026-03-02 01:00:29 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:00:29.803655 | orchestrator | 2026-03-02 01:00:29 | INFO  | Task b02f6b27-7397-4d55-8bab-61a2587fc743 is in state STARTED 2026-03-02 01:00:29.805200 | orchestrator | 2026-03-02 01:00:29 | INFO  | Task 4213a73d-7787-4ac6-ac8f-2306ab19fbcc is in state STARTED 2026-03-02 01:00:29.806659 | orchestrator | 2026-03-02 01:00:29 | INFO  | Task 1eaa4b39-072e-4751-a21e-2b06d801c38b is in state STARTED 2026-03-02 01:00:29.807566 | orchestrator | 2026-03-02 01:00:29 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:02:00.928039 | orchestrator | 2026-03-02 01:02:00 | INFO  | Task
c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:02:00.929037 | orchestrator | 2026-03-02 01:02:00 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:02:00.929352 | orchestrator | 2026-03-02 01:02:00 | INFO  | Task b02f6b27-7397-4d55-8bab-61a2587fc743 is in state STARTED 2026-03-02 01:02:00.930346 | orchestrator | 2026-03-02 01:02:00 | INFO  | Task 4213a73d-7787-4ac6-ac8f-2306ab19fbcc is in state SUCCESS 2026-03-02 01:02:00.930717 | orchestrator | 2026-03-02 01:02:00.930739 | orchestrator | 2026-03-02 01:02:00.930756 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-02 01:02:00.930815 | orchestrator | 2026-03-02 01:02:00.930821 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-02 01:02:00.930827 | orchestrator | Monday 02 March 2026 00:59:33 +0000 (0:00:00.207) 0:00:00.207 ********** 2026-03-02 01:02:00.930832 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-02 01:02:00.930838 | orchestrator | 2026-03-02 01:02:00.930843 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-02 01:02:00.930851 | orchestrator | Monday 02 March 2026 00:59:33 +0000 (0:00:00.199) 0:00:00.406 ********** 2026-03-02 01:02:00.930862 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-02 01:02:00.930868 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-02 01:02:00.930873 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-02 01:02:00.930878 | orchestrator | 2026-03-02 01:02:00.930883 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-02 01:02:00.930889 | orchestrator | Monday 02 March 2026 00:59:34 +0000 (0:00:01.168) 
0:00:01.575 ********** 2026-03-02 01:02:00.930894 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-02 01:02:00.930899 | orchestrator | 2026-03-02 01:02:00.930904 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-03-02 01:02:00.930909 | orchestrator | Monday 02 March 2026 00:59:35 +0000 (0:00:01.448) 0:00:03.023 ********** 2026-03-02 01:02:00.930914 | orchestrator | changed: [testbed-manager] 2026-03-02 01:02:00.930919 | orchestrator | 2026-03-02 01:02:00.930924 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-02 01:02:00.930929 | orchestrator | Monday 02 March 2026 00:59:36 +0000 (0:00:00.941) 0:00:03.965 ********** 2026-03-02 01:02:00.930934 | orchestrator | changed: [testbed-manager] 2026-03-02 01:02:00.930939 | orchestrator | 2026-03-02 01:02:00.930944 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-02 01:02:00.930949 | orchestrator | Monday 02 March 2026 00:59:37 +0000 (0:00:00.874) 0:00:04.839 ********** 2026-03-02 01:02:00.930954 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
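The `FAILED - RETRYING: ... (10 retries left)` message above is Ansible's standard retry loop: the task re-runs until its `until` condition holds or the retries are exhausted. A minimal sketch of how such a task is typically declared, assuming a docker-compose based service as deployed here (module choice, paths, and timings are illustrative assumptions, not taken from the actual osism.services.cephclient role):

```yaml
# Hypothetical retries/until task; module, paths, and timings are
# illustrative assumptions, not the role's real implementation.
- name: Manage cephclient service
  community.docker.docker_compose_v2:
    project_src: /opt/cephclient   # directory holding the docker-compose.yml
    state: present
  register: result
  retries: 10        # produces the "(10 retries left)" countdown on failure
  delay: 5           # seconds between attempts
  until: result is success
```

With ten retries and a delay between attempts, a slow container start shows up as a long task duration rather than a failure, which matches the 40.34s recorded for this task in the tasks recap.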
2026-03-02 01:02:00.930959 | orchestrator | ok: [testbed-manager] 2026-03-02 01:02:00.930965 | orchestrator | 2026-03-02 01:02:00.930970 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-02 01:02:00.930975 | orchestrator | Monday 02 March 2026 01:00:18 +0000 (0:00:40.338) 0:00:45.178 ********** 2026-03-02 01:02:00.931049 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-02 01:02:00.931058 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-02 01:02:00.931063 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-02 01:02:00.931069 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-02 01:02:00.931074 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-02 01:02:00.931078 | orchestrator | 2026-03-02 01:02:00.931083 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-02 01:02:00.931088 | orchestrator | Monday 02 March 2026 01:00:21 +0000 (0:00:03.677) 0:00:48.855 ********** 2026-03-02 01:02:00.931093 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-02 01:02:00.931099 | orchestrator | 2026-03-02 01:02:00.931103 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-02 01:02:00.931109 | orchestrator | Monday 02 March 2026 01:00:22 +0000 (0:00:00.399) 0:00:49.254 ********** 2026-03-02 01:02:00.931209 | orchestrator | skipping: [testbed-manager] 2026-03-02 01:02:00.931221 | orchestrator | 2026-03-02 01:02:00.931226 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-02 01:02:00.931232 | orchestrator | Monday 02 March 2026 01:00:22 +0000 (0:00:00.137) 0:00:49.391 ********** 2026-03-02 01:02:00.931237 | orchestrator | skipping: [testbed-manager] 2026-03-02 01:02:00.931244 | orchestrator | 2026-03-02 01:02:00.931251 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2026-03-02 01:02:00.931256 | orchestrator | Monday 02 March 2026 01:00:22 +0000 (0:00:00.390) 0:00:49.782 ********** 2026-03-02 01:02:00.931261 | orchestrator | changed: [testbed-manager] 2026-03-02 01:02:00.931266 | orchestrator | 2026-03-02 01:02:00.931271 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-03-02 01:02:00.931276 | orchestrator | Monday 02 March 2026 01:00:23 +0000 (0:00:01.135) 0:00:50.917 ********** 2026-03-02 01:02:00.931281 | orchestrator | changed: [testbed-manager] 2026-03-02 01:02:00.931286 | orchestrator | 2026-03-02 01:02:00.931290 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for a healthy service] ******* 2026-03-02 01:02:00.931295 | orchestrator | Monday 02 March 2026 01:00:24 +0000 (0:00:00.662) 0:00:51.580 ********** 2026-03-02 01:02:00.931300 | orchestrator | changed: [testbed-manager] 2026-03-02 01:02:00.931305 | orchestrator | 2026-03-02 01:02:00.931310 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-02 01:02:00.931315 | orchestrator | Monday 02 March 2026 01:00:24 +0000 (0:00:00.475) 0:00:52.055 ********** 2026-03-02 01:02:00.931323 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-02 01:02:00.931329 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-02 01:02:00.931333 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-02 01:02:00.931338 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-02 01:02:00.931343 | orchestrator | 2026-03-02 01:02:00.931348 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 01:02:00.931354 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-02 01:02:00.931360 | orchestrator | 2026-03-02 01:02:00.931365 | orchestrator | 2026-03-02
01:02:00.931379 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 01:02:00.931392 | orchestrator | Monday 02 March 2026 01:00:26 +0000 (0:00:01.153) 0:00:53.209 ********** 2026-03-02 01:02:00.931398 | orchestrator | =============================================================================== 2026-03-02 01:02:00.931403 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.34s 2026-03-02 01:02:00.931408 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.68s 2026-03-02 01:02:00.931413 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.45s 2026-03-02 01:02:00.931418 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.17s 2026-03-02 01:02:00.931423 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.15s 2026-03-02 01:02:00.931439 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.14s 2026-03-02 01:02:00.931445 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.94s 2026-03-02 01:02:00.931450 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.87s 2026-03-02 01:02:00.931455 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.66s 2026-03-02 01:02:00.931460 | orchestrator | osism.services.cephclient : Wait for a healthy service ------------------ 0.48s 2026-03-02 01:02:00.931465 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.40s 2026-03-02 01:02:00.931470 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.39s 2026-03-02 01:02:00.931476 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.20s 2026-03-02 01:02:00.931481 |
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2026-03-02 01:02:00.931497 | orchestrator | 2026-03-02 01:02:00.931503 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-02 01:02:00.931508 | orchestrator | 2.16.14 2026-03-02 01:02:00.931514 | orchestrator | 2026-03-02 01:02:00.931519 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2026-03-02 01:02:00.931526 | orchestrator | 2026-03-02 01:02:00.931532 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-02 01:02:00.931538 | orchestrator | Monday 02 March 2026 01:00:29 +0000 (0:00:00.197) 0:00:00.197 ********** 2026-03-02 01:02:00.931543 | orchestrator | changed: [testbed-manager] 2026-03-02 01:02:00.931547 | orchestrator | 2026-03-02 01:02:00.931553 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-02 01:02:00.931558 | orchestrator | Monday 02 March 2026 01:00:30 +0000 (0:00:01.290) 0:00:01.487 ********** 2026-03-02 01:02:00.931563 | orchestrator | changed: [testbed-manager] 2026-03-02 01:02:00.931573 | orchestrator | 2026-03-02 01:02:00.931578 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-02 01:02:00.931583 | orchestrator | Monday 02 March 2026 01:00:31 +0000 (0:00:00.895) 0:00:02.383 ********** 2026-03-02 01:02:00.931588 | orchestrator | changed: [testbed-manager] 2026-03-02 01:02:00.931594 | orchestrator | 2026-03-02 01:02:00.931599 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-02 01:02:00.931604 | orchestrator | Monday 02 March 2026 01:00:32 +0000 (0:00:00.970) 0:00:03.353 ********** 2026-03-02 01:02:00.931610 | orchestrator | changed: [testbed-manager] 2026-03-02 01:02:00.931613 | orchestrator | 2026-03-02 01:02:00.931617 | orchestrator | TASK
[Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-02 01:02:00.931620 | orchestrator | Monday 02 March 2026 01:00:33 +0000 (0:00:00.991) 0:00:04.345 ********** 2026-03-02 01:02:00.931623 | orchestrator | changed: [testbed-manager] 2026-03-02 01:02:00.931626 | orchestrator | 2026-03-02 01:02:00.931630 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-02 01:02:00.931636 | orchestrator | Monday 02 March 2026 01:00:34 +0000 (0:00:00.869) 0:00:05.215 ********** 2026-03-02 01:02:00.931642 | orchestrator | changed: [testbed-manager] 2026-03-02 01:02:00.931650 | orchestrator | 2026-03-02 01:02:00.931654 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-02 01:02:00.931659 | orchestrator | Monday 02 March 2026 01:00:35 +0000 (0:00:00.932) 0:00:06.147 ********** 2026-03-02 01:02:00.931664 | orchestrator | changed: [testbed-manager] 2026-03-02 01:02:00.931669 | orchestrator | 2026-03-02 01:02:00.931674 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-02 01:02:00.931679 | orchestrator | Monday 02 March 2026 01:00:36 +0000 (0:00:01.198) 0:00:07.346 ********** 2026-03-02 01:02:00.931684 | orchestrator | changed: [testbed-manager] 2026-03-02 01:02:00.931689 | orchestrator | 2026-03-02 01:02:00.931694 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-02 01:02:00.931700 | orchestrator | Monday 02 March 2026 01:00:37 +0000 (0:00:01.070) 0:00:08.416 ********** 2026-03-02 01:02:00.931710 | orchestrator | changed: [testbed-manager] 2026-03-02 01:02:00.931717 | orchestrator | 2026-03-02 01:02:00.931724 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-02 01:02:00.931730 | orchestrator | Monday 02 March 2026 01:01:34 +0000 (0:00:56.240) 0:01:04.656 ********** 2026-03-02 
01:02:00.931735 | orchestrator | skipping: [testbed-manager] 2026-03-02 01:02:00.931740 | orchestrator | 2026-03-02 01:02:00.931745 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-02 01:02:00.931752 | orchestrator | 2026-03-02 01:02:00.931757 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-02 01:02:00.931761 | orchestrator | Monday 02 March 2026 01:01:34 +0000 (0:00:00.125) 0:01:04.782 ********** 2026-03-02 01:02:00.931765 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:02:00.931769 | orchestrator | 2026-03-02 01:02:00.931787 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-02 01:02:00.931791 | orchestrator | 2026-03-02 01:02:00.931795 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-02 01:02:00.931799 | orchestrator | Monday 02 March 2026 01:01:45 +0000 (0:00:11.308) 0:01:16.091 ********** 2026-03-02 01:02:00.931803 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:02:00.931808 | orchestrator | 2026-03-02 01:02:00.931826 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-02 01:02:00.931833 | orchestrator | 2026-03-02 01:02:00.931838 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-02 01:02:00.931844 | orchestrator | Monday 02 March 2026 01:01:46 +0000 (0:00:01.389) 0:01:17.481 ********** 2026-03-02 01:02:00.931849 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:02:00.931854 | orchestrator | 2026-03-02 01:02:00.931860 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 01:02:00.931866 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-02 01:02:00.931870 | orchestrator | testbed-node-0 : 
ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 01:02:00.931875 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 01:02:00.931879 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 01:02:00.931882 | orchestrator | 2026-03-02 01:02:00.931886 | orchestrator | 2026-03-02 01:02:00.931890 | orchestrator | 2026-03-02 01:02:00.931894 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 01:02:00.931899 | orchestrator | Monday 02 March 2026 01:01:58 +0000 (0:00:11.066) 0:01:28.547 ********** 2026-03-02 01:02:00.931904 | orchestrator | =============================================================================== 2026-03-02 01:02:00.931911 | orchestrator | Create admin user ------------------------------------------------------ 56.24s 2026-03-02 01:02:00.931918 | orchestrator | Restart ceph manager service ------------------------------------------- 23.76s 2026-03-02 01:02:00.931923 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.29s 2026-03-02 01:02:00.931928 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.20s 2026-03-02 01:02:00.931933 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.07s 2026-03-02 01:02:00.931938 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.99s 2026-03-02 01:02:00.931943 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.97s 2026-03-02 01:02:00.931948 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.93s 2026-03-02 01:02:00.931953 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.90s 2026-03-02 01:02:00.931964 | orchestrator | Set 
mgr/dashboard/standby_behaviour to error ---------------------------- 0.87s 2026-03-02 01:02:00.931969 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.13s 2026-03-02 01:02:00.931976 | orchestrator | 2026-03-02 01:02:00 | INFO  | Task 1eaa4b39-072e-4751-a21e-2b06d801c38b is in state STARTED 2026-03-02 01:02:00.931980 | orchestrator | 2026-03-02 01:02:00 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:02:03.969323 | orchestrator | 2026-03-02 01:02:03 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:02:03.970948 | orchestrator | 2026-03-02 01:02:03 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:02:03.973443 | orchestrator | 2026-03-02 01:02:03 | INFO  | Task b02f6b27-7397-4d55-8bab-61a2587fc743 is in state STARTED 2026-03-02 01:02:03.975038 | orchestrator | 2026-03-02 01:02:03 | INFO  | Task 1eaa4b39-072e-4751-a21e-2b06d801c38b is in state STARTED 2026-03-02 01:02:03.975088 | orchestrator | 2026-03-02 01:02:03 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:02:31.266203 | orchestrator | 2026-03-02 01:02:31 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:02:31.266380 | orchestrator | 2026-03-02 01:02:31 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:02:31.267806 | orchestrator | 2026-03-02 01:02:31 | INFO  | Task b02f6b27-7397-4d55-8bab-61a2587fc743 is in state STARTED 2026-03-02 01:02:31.268626 | orchestrator | 2026-03-02 01:02:31 | INFO  | Task 1eaa4b39-072e-4751-a21e-2b06d801c38b is in state SUCCESS 2026-03-02 01:02:31.272935 | orchestrator | 2026-03-02 01:02:31.273000 | orchestrator | 2026-03-02 01:02:31.273015 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-02 01:02:31.273027 | orchestrator | 2026-03-02 01:02:31.273038 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-02 01:02:31.273102 | orchestrator | Monday 02 March 2026 01:00:25 +0000 (0:00:00.297) 0:00:00.297 ********** 2026-03-02 01:02:31.273122 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:02:31.273134 | orchestrator | ok: [testbed-node-1] 2026-03-02 01:02:31.273143 | orchestrator | ok: [testbed-node-2] 2026-03-02 01:02:31.273170 | orchestrator | 2026-03-02 01:02:31.273181 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-02 01:02:31.273241 | orchestrator | Monday 02 March 2026 01:00:25 +0000 (0:00:00.383) 0:00:00.681 ********** 2026-03-02 01:02:31.273248 |
orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-02 01:02:31.273254 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-02 01:02:31.273259 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-02 01:02:31.273265 | orchestrator | 2026-03-02 01:02:31.273271 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-03-02 01:02:31.273277 | orchestrator | 2026-03-02 01:02:31.273282 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-02 01:02:31.273288 | orchestrator | Monday 02 March 2026 01:00:26 +0000 (0:00:00.593) 0:00:01.274 ********** 2026-03-02 01:02:31.273294 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:02:31.273300 | orchestrator | 2026-03-02 01:02:31.273306 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-02 01:02:31.273312 | orchestrator | Monday 02 March 2026 01:00:26 +0000 (0:00:00.528) 0:00:01.803 ********** 2026-03-02 01:02:31.273318 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-02 01:02:31.273323 | orchestrator | 2026-03-02 01:02:31.273329 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-02 01:02:31.273338 | orchestrator | Monday 02 March 2026 01:00:30 +0000 (0:00:03.756) 0:00:05.560 ********** 2026-03-02 01:02:31.273347 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-02 01:02:31.273360 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-02 01:02:31.273373 | orchestrator | 2026-03-02 01:02:31.273395 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-02 
01:02:31.273405 | orchestrator | Monday 02 March 2026 01:00:37 +0000 (0:00:06.905) 0:00:12.466 ********** 2026-03-02 01:02:31.273414 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating projects (5 retries left). 2026-03-02 01:02:31.273425 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-02 01:02:31.273435 | orchestrator | 2026-03-02 01:02:31.273450 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-02 01:02:31.273475 | orchestrator | Monday 02 March 2026 01:00:53 +0000 (0:00:16.208) 0:00:28.674 ********** 2026-03-02 01:02:31.273539 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-02 01:02:31.273550 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-02 01:02:31.273559 | orchestrator | 2026-03-02 01:02:31.273568 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-02 01:02:31.273578 | orchestrator | Monday 02 March 2026 01:00:57 +0000 (0:00:03.687) 0:00:32.361 ********** 2026-03-02 01:02:31.273587 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-02 01:02:31.273597 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-02 01:02:31.273609 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-02 01:02:31.273623 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-02 01:02:31.273634 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-02 01:02:31.273645 | orchestrator | 2026-03-02 01:02:31.273655 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-02 01:02:31.273664 | orchestrator | Monday 02 March 2026 01:01:12 +0000 (0:00:15.639) 0:00:48.001 ********** 2026-03-02 01:02:31.273674 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-02 01:02:31.273683 | orchestrator | 2026-03-02 
01:02:31.273695 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-02 01:02:31.273708 | orchestrator | Monday 02 March 2026 01:01:16 +0000 (0:00:03.675) 0:00:51.676 ********** 2026-03-02 01:02:31.273734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-02 01:02:31.273772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-02 01:02:31.273784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-02 01:02:31.273795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.273806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.273822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.273839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.273854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.273865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.273874 | orchestrator | 2026-03-02 01:02:31.273884 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-02 01:02:31.273894 | orchestrator | Monday 02 March 2026 01:01:18 +0000 (0:00:02.447) 0:00:54.123 ********** 2026-03-02 01:02:31.273903 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-02 01:02:31.273913 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-02 01:02:31.273922 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-02 01:02:31.273932 | orchestrator | 2026-03-02 01:02:31.273942 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-02 01:02:31.273951 | orchestrator | Monday 02 March 2026 01:01:19 +0000 (0:00:00.892) 0:00:55.016 ********** 2026-03-02 01:02:31.273961 | orchestrator | skipping: [testbed-node-0] 2026-03-02 
01:02:31.273971 | orchestrator | 2026-03-02 01:02:31.273981 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-02 01:02:31.273990 | orchestrator | Monday 02 March 2026 01:01:20 +0000 (0:00:00.295) 0:00:55.311 ********** 2026-03-02 01:02:31.274002 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:02:31.274095 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:02:31.274110 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:02:31.274120 | orchestrator | 2026-03-02 01:02:31.274129 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-02 01:02:31.274153 | orchestrator | Monday 02 March 2026 01:01:20 +0000 (0:00:00.750) 0:00:56.061 ********** 2026-03-02 01:02:31.274164 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:02:31.274173 | orchestrator | 2026-03-02 01:02:31.274183 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-02 01:02:31.274191 | orchestrator | Monday 02 March 2026 01:01:22 +0000 (0:00:01.129) 0:00:57.191 ********** 2026-03-02 01:02:31.274197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-02 01:02:31.274217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-02 01:02:31.274224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-02 01:02:31.274230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.274236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.274246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.274252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.274265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.274272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}}) 2026-03-02 01:02:31.274278 | orchestrator | 2026-03-02 01:02:31.274284 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-02 01:02:31.274290 | orchestrator | Monday 02 March 2026 01:01:25 +0000 (0:00:03.254) 0:01:00.445 ********** 2026-03-02 01:02:31.274296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-02 01:02:31.274307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-02 01:02:31.274314 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:02:31.274320 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:02:31.274330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-02 01:02:31.274342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-02 01:02:31.274351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:02:31.274358 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:02:31.274365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}})  2026-03-02 01:02:31.274380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-02 01:02:31.274396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:02:31.274407 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:02:31.274416 | orchestrator | 2026-03-02 01:02:31.274425 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-02 01:02:31.274434 | orchestrator | Monday 02 March 2026 01:01:26 +0000 (0:00:01.689) 0:01:02.134 ********** 2026-03-02 01:02:31.274455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-02 01:02:31.274465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-02 01:02:31.274482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}})  2026-03-02 01:02:31.274493 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:02:31.274504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-02 01:02:31.274514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-02 01:02:31.274528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:02:31.274536 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:02:31.274545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-02 01:02:31.274552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-02 01:02:31.274562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:02:31.274568 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:02:31.274574 | orchestrator | 2026-03-02 01:02:31.274580 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-02 01:02:31.274586 | orchestrator | Monday 02 March 2026 01:01:28 +0000 (0:00:01.330) 0:01:03.465 ********** 2026-03-02 01:02:31.274592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}}) 2026-03-02 01:02:31.274601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-02 01:02:31.274611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-02 01:02:31.274621 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.274627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.274633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}}) 2026-03-02 01:02:31.274639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.274649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.274658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.274668 | orchestrator | 2026-03-02 01:02:31.274677 | orchestrator | TASK [barbican : Copying over barbican-api.ini] 
******************************** 2026-03-02 01:02:31.274687 | orchestrator | Monday 02 March 2026 01:01:31 +0000 (0:00:02.982) 0:01:06.447 ********** 2026-03-02 01:02:31.274699 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:02:31.274713 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:02:31.274723 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:02:31.274733 | orchestrator | 2026-03-02 01:02:31.274742 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-02 01:02:31.274750 | orchestrator | Monday 02 March 2026 01:01:34 +0000 (0:00:03.331) 0:01:09.778 ********** 2026-03-02 01:02:31.274760 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-02 01:02:31.274770 | orchestrator | 2026-03-02 01:02:31.274781 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-02 01:02:31.274791 | orchestrator | Monday 02 March 2026 01:01:35 +0000 (0:00:00.821) 0:01:10.599 ********** 2026-03-02 01:02:31.274801 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:02:31.274811 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:02:31.274821 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:02:31.274831 | orchestrator | 2026-03-02 01:02:31.274840 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-02 01:02:31.274847 | orchestrator | Monday 02 March 2026 01:01:36 +0000 (0:00:00.733) 0:01:11.333 ********** 2026-03-02 01:02:31.274853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-02 01:02:31.274859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-02 01:02:31.274877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-02 01:02:31.274890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.274896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.274902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.274908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.274914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.274921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.274930 | orchestrator | 2026-03-02 01:02:31.274941 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-02 01:02:31.274952 | orchestrator | Monday 02 March 2026 01:01:46 +0000 (0:00:10.068) 0:01:21.402 ********** 2026-03-02 01:02:31.274962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-02 01:02:31.274968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-02 01:02:31.274975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:02:31.274981 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:02:31.274987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-02 01:02:31.274993 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-02 01:02:31.275010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:02:31.275016 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:02:31.275025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-02 01:02:31.275036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-02 01:02:31.275046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:02:31.275103 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:02:31.275114 | orchestrator | 2026-03-02 01:02:31.275124 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-02 01:02:31.275131 | orchestrator | Monday 02 March 2026 01:01:46 +0000 (0:00:00.534) 0:01:21.936 ********** 2026-03-02 01:02:31.275137 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-02 01:02:31.275158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-02 01:02:31.275165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 
'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-02 01:02:31.275171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.275177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.275183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.275193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.275206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.275213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:02:31.275219 | orchestrator | 2026-03-02 01:02:31.275225 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-02 01:02:31.275231 | orchestrator | Monday 02 March 2026 01:01:49 +0000 (0:00:03.180) 0:01:25.116 ********** 2026-03-02 01:02:31.275238 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:02:31.275249 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:02:31.275259 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:02:31.275269 | orchestrator | 2026-03-02 01:02:31.275279 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-02 01:02:31.275288 | orchestrator | Monday 02 March 2026 01:01:50 +0000 (0:00:00.264) 0:01:25.381 ********** 2026-03-02 01:02:31.275297 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:02:31.275306 | orchestrator | 2026-03-02 01:02:31.275316 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-02 01:02:31.275326 | orchestrator | Monday 02 March 2026 01:01:52 +0000 (0:00:02.556) 0:01:27.937 ********** 2026-03-02 01:02:31.275336 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:02:31.275346 | 
orchestrator | 2026-03-02 01:02:31.275353 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-02 01:02:31.275359 | orchestrator | Monday 02 March 2026 01:01:55 +0000 (0:00:02.808) 0:01:30.746 ********** 2026-03-02 01:02:31.275365 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:02:31.275376 | orchestrator | 2026-03-02 01:02:31.275386 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-02 01:02:31.275395 | orchestrator | Monday 02 March 2026 01:02:06 +0000 (0:00:11.337) 0:01:42.084 ********** 2026-03-02 01:02:31.275405 | orchestrator | 2026-03-02 01:02:31.275415 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-02 01:02:31.275426 | orchestrator | Monday 02 March 2026 01:02:06 +0000 (0:00:00.064) 0:01:42.149 ********** 2026-03-02 01:02:31.275436 | orchestrator | 2026-03-02 01:02:31.275446 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-02 01:02:31.275456 | orchestrator | Monday 02 March 2026 01:02:07 +0000 (0:00:00.133) 0:01:42.282 ********** 2026-03-02 01:02:31.275470 | orchestrator | 2026-03-02 01:02:31.275481 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-02 01:02:31.275491 | orchestrator | Monday 02 March 2026 01:02:07 +0000 (0:00:00.047) 0:01:42.329 ********** 2026-03-02 01:02:31.275501 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:02:31.275511 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:02:31.275521 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:02:31.275531 | orchestrator | 2026-03-02 01:02:31.275542 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-02 01:02:31.275552 | orchestrator | Monday 02 March 2026 01:02:16 +0000 (0:00:08.937) 0:01:51.266 ********** 2026-03-02 01:02:31.275561 | 
orchestrator | changed: [testbed-node-0] 2026-03-02 01:02:31.275572 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:02:31.275582 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:02:31.275592 | orchestrator | 2026-03-02 01:02:31.275601 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-02 01:02:31.275611 | orchestrator | Monday 02 March 2026 01:02:21 +0000 (0:00:05.673) 0:01:56.940 ********** 2026-03-02 01:02:31.275621 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:02:31.275630 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:02:31.275640 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:02:31.275650 | orchestrator | 2026-03-02 01:02:31.275659 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 01:02:31.275669 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-02 01:02:31.275678 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-02 01:02:31.275689 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-02 01:02:31.275699 | orchestrator | 2026-03-02 01:02:31.275709 | orchestrator | 2026-03-02 01:02:31.275718 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 01:02:31.275728 | orchestrator | Monday 02 March 2026 01:02:30 +0000 (0:00:08.644) 0:02:05.584 ********** 2026-03-02 01:02:31.275738 | orchestrator | =============================================================================== 2026-03-02 01:02:31.275755 | orchestrator | service-ks-register : barbican | Creating projects --------------------- 16.21s 2026-03-02 01:02:31.275766 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.64s 2026-03-02 01:02:31.275776 | orchestrator | 
barbican : Running barbican bootstrap container ------------------------ 11.34s 2026-03-02 01:02:31.275792 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.07s 2026-03-02 01:02:31.275802 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.94s 2026-03-02 01:02:31.275812 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 8.64s 2026-03-02 01:02:31.275821 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.91s 2026-03-02 01:02:31.275830 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.67s 2026-03-02 01:02:31.275839 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.76s 2026-03-02 01:02:31.275850 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.69s 2026-03-02 01:02:31.275860 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.68s 2026-03-02 01:02:31.275870 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.33s 2026-03-02 01:02:31.275880 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.25s 2026-03-02 01:02:31.275890 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.18s 2026-03-02 01:02:31.275900 | orchestrator | barbican : Copying over config.json files for services ------------------ 2.98s 2026-03-02 01:02:31.275916 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.81s 2026-03-02 01:02:31.275927 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.56s 2026-03-02 01:02:31.275937 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.45s 2026-03-02 01:02:31.275946 | orchestrator | 
service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.69s 2026-03-02 01:02:31.275955 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.33s 2026-03-02 01:02:31.275965 | orchestrator | 2026-03-02 01:02:31 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:02:34.295733 | orchestrator | 2026-03-02 01:02:34 | INFO  | Task dd74955a-a46f-4b58-ba55-e2111c854aea is in state STARTED 2026-03-02 01:02:34.297370 | orchestrator | 2026-03-02 01:02:34 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:02:34.299030 | orchestrator | 2026-03-02 01:02:34 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:02:34.300555 | orchestrator | 2026-03-02 01:02:34 | INFO  | Task b02f6b27-7397-4d55-8bab-61a2587fc743 is in state STARTED 2026-03-02 01:02:34.300820 | orchestrator | 2026-03-02 01:02:34 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:02:37.342386 | orchestrator | 2026-03-02 01:02:37 | INFO  | Task dd74955a-a46f-4b58-ba55-e2111c854aea is in state STARTED 2026-03-02 01:02:37.342435 | orchestrator | 2026-03-02 01:02:37 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:02:37.345256 | orchestrator | 2026-03-02 01:02:37 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:02:37.345970 | orchestrator | 2026-03-02 01:02:37 | INFO  | Task b02f6b27-7397-4d55-8bab-61a2587fc743 is in state STARTED 2026-03-02 01:02:37.348219 | orchestrator | 2026-03-02 01:02:37 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:02:40.406171 | orchestrator | 2026-03-02 01:02:40 | INFO  | Task dd74955a-a46f-4b58-ba55-e2111c854aea is in state STARTED 2026-03-02 01:02:40.407315 | orchestrator | 2026-03-02 01:02:40 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:02:40.408424 | orchestrator | 2026-03-02 01:02:40 | INFO  | Task 
c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:02:40.409603 | orchestrator | 2026-03-02 01:02:40 | INFO  | Task b02f6b27-7397-4d55-8bab-61a2587fc743 is in state STARTED 2026-03-02 01:02:40.409636 | orchestrator | 2026-03-02 01:02:40 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:02:43.458363 | orchestrator | 2026-03-02 01:02:43 | INFO  | Task dd74955a-a46f-4b58-ba55-e2111c854aea is in state STARTED 2026-03-02 01:02:43.458958 | orchestrator | 2026-03-02 01:02:43 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:02:43.459817 | orchestrator | 2026-03-02 01:02:43 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:02:43.460567 | orchestrator | 2026-03-02 01:02:43 | INFO  | Task b02f6b27-7397-4d55-8bab-61a2587fc743 is in state STARTED 2026-03-02 01:02:43.460633 | orchestrator | 2026-03-02 01:02:43 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:02:46.511998 | orchestrator | 2026-03-02 01:02:46 | INFO  | Task dd74955a-a46f-4b58-ba55-e2111c854aea is in state STARTED 2026-03-02 01:02:46.513937 | orchestrator | 2026-03-02 01:02:46 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:02:46.516047 | orchestrator | 2026-03-02 01:02:46 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:02:46.517956 | orchestrator | 2026-03-02 01:02:46 | INFO  | Task b02f6b27-7397-4d55-8bab-61a2587fc743 is in state STARTED 2026-03-02 01:02:46.518481 | orchestrator | 2026-03-02 01:02:46 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:02:49.568573 | orchestrator | 2026-03-02 01:02:49 | INFO  | Task dd74955a-a46f-4b58-ba55-e2111c854aea is in state STARTED 2026-03-02 01:02:49.570204 | orchestrator | 2026-03-02 01:02:49 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:02:49.570738 | orchestrator | 2026-03-02 01:02:49 | INFO  | Task 
c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:03:16.984295 | orchestrator | 2026-03-02 01:03:16 | INFO  | Task b02f6b27-7397-4d55-8bab-61a2587fc743 is in state SUCCESS 2026-03-02 01:03:16.986166 | orchestrator | 2026-03-02 01:03:16.986211 | orchestrator | 2026-03-02 01:03:16.986255 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-02 01:03:16.986264 | orchestrator | 2026-03-02 01:03:16.986268 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-02 01:03:16.986272 | orchestrator | Monday 02 March 2026 01:00:24 +0000 (0:00:00.257) 0:00:00.257 ********** 2026-03-02 01:03:16.986276 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:03:16.986281 | orchestrator | ok: [testbed-node-1] 2026-03-02 01:03:16.986284 | orchestrator | ok: [testbed-node-2] 2026-03-02 01:03:16.986288 | orchestrator | 2026-03-02 01:03:16.986292 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-02 01:03:16.986296 | orchestrator | Monday 02 March 2026 01:00:25 +0000 (0:00:00.308) 0:00:00.566 ********** 2026-03-02 01:03:16.986301 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-03-02 01:03:16.986361 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-03-02 01:03:16.986369 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-02 01:03:16.986402 | orchestrator | 2026-03-02 01:03:16.986409 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-03-02 01:03:16.986416 | orchestrator | 2026-03-02 01:03:16.986422 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-02 01:03:16.986429 | orchestrator | Monday 02 March 2026 01:00:25 +0000 (0:00:00.595) 0:00:01.162 ********** 2026-03-02 01:03:16.986647 | orchestrator | included: 
/ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:03:16.986660 | orchestrator | 2026-03-02 01:03:16.986666 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-03-02 01:03:16.986672 | orchestrator | Monday 02 March 2026 01:00:26 +0000 (0:00:00.705) 0:00:01.867 ********** 2026-03-02 01:03:16.986679 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-02 01:03:16.986685 | orchestrator | 2026-03-02 01:03:16.986691 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-02 01:03:16.986698 | orchestrator | Monday 02 March 2026 01:00:30 +0000 (0:00:03.789) 0:00:05.657 ********** 2026-03-02 01:03:16.986705 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-02 01:03:16.986742 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-02 01:03:16.986749 | orchestrator | 2026-03-02 01:03:16.986756 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-02 01:03:16.986762 | orchestrator | Monday 02 March 2026 01:00:36 +0000 (0:00:06.819) 0:00:12.477 ********** 2026-03-02 01:03:16.986768 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-02 01:03:16.986774 | orchestrator | 2026-03-02 01:03:16.986781 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-02 01:03:16.986786 | orchestrator | Monday 02 March 2026 01:00:40 +0000 (0:00:03.624) 0:00:16.101 ********** 2026-03-02 01:03:16.986856 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-02 01:03:16.986863 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-02 01:03:16.986870 | orchestrator | 2026-03-02 01:03:16.986876 | orchestrator | TASK 
[service-ks-register : designate | Creating roles] ************************ 2026-03-02 01:03:16.986882 | orchestrator | Monday 02 March 2026 01:00:44 +0000 (0:00:04.346) 0:00:20.448 ********** 2026-03-02 01:03:16.986897 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-02 01:03:16.986903 | orchestrator | 2026-03-02 01:03:16.986909 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-02 01:03:16.986915 | orchestrator | Monday 02 March 2026 01:00:48 +0000 (0:00:03.654) 0:00:24.103 ********** 2026-03-02 01:03:16.986921 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-02 01:03:16.986928 | orchestrator | 2026-03-02 01:03:16.986934 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-02 01:03:16.986940 | orchestrator | Monday 02 March 2026 01:00:52 +0000 (0:00:03.638) 0:00:27.741 ********** 2026-03-02 01:03:16.986949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-02 01:03:16.987007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-02 01:03:16.987015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-02 01:03:16.987023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-02 01:03:16.987035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-02 01:03:16.987060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-02 01:03:16.987067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.987085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.987092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.987099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.987107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.987117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.987124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.987136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.987147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.987154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.987756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.987774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.987781 | orchestrator | 2026-03-02 01:03:16.987788 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-02 01:03:16.987795 | orchestrator | Monday 02 March 2026 01:00:54 +0000 (0:00:02.652) 0:00:30.393 ********** 2026-03-02 01:03:16.987801 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:03:16.987807 | orchestrator | 2026-03-02 01:03:16.987818 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-02 01:03:16.987825 | orchestrator | Monday 02 March 2026 01:00:55 +0000 (0:00:00.123) 0:00:30.516 ********** 2026-03-02 01:03:16.987831 | orchestrator | skipping: 
[testbed-node-0] 2026-03-02 01:03:16.987838 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:03:16.987844 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:03:16.987850 | orchestrator | 2026-03-02 01:03:16.987856 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-02 01:03:16.987870 | orchestrator | Monday 02 March 2026 01:00:55 +0000 (0:00:00.254) 0:00:30.771 ********** 2026-03-02 01:03:16.987876 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:03:16.987883 | orchestrator | 2026-03-02 01:03:16.987889 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-02 01:03:16.987895 | orchestrator | Monday 02 March 2026 01:00:55 +0000 (0:00:00.602) 0:00:31.373 ********** 2026-03-02 01:03:16.987902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-02 01:03:16.987933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-02 01:03:16.987941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-02 01:03:16.987947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2026-03-02 01:03:16.988098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988132 | orchestrator | 2026-03-02 01:03:16.988136 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-02 01:03:16.988140 | orchestrator | Monday 02 March 2026 01:01:01 +0000 (0:00:05.611) 0:00:36.985 ********** 2026-03-02 01:03:16.988144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-02 01:03:16.988148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-02 01:03:16.988164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988185 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:03:16.988189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 
'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-02 01:03:16.988193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-02 01:03:16.988208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988213 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988227 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:03:16.988233 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-02 01:03:16.988237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-02 01:03:16.988252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-03-02 01:03:16.988273 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:03:16.988277 | orchestrator | 2026-03-02 01:03:16.988281 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-02 01:03:16.988285 | orchestrator | Monday 02 March 2026 01:01:02 +0000 (0:00:00.714) 0:00:37.700 ********** 2026-03-02 01:03:16.988291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-02 01:03:16.988295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-02 01:03:16.988310 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988330 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:03:16.988335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-02 01:03:16.988340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-02 01:03:16.988353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988372 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:03:16.988378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-02 01:03:16.988383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-02 01:03:16.988387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988422 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:03:16.988427 | orchestrator | 2026-03-02 01:03:16.988431 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-02 01:03:16.988436 | orchestrator | Monday 02 March 2026 01:01:03 +0000 (0:00:01.244) 0:00:38.944 ********** 2026-03-02 01:03:16.988442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-02 01:03:16.988447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-02 01:03:16.988463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-02 01:03:16.988468 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988488 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988574 | orchestrator | 2026-03-02 01:03:16.988579 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-02 01:03:16.988583 | orchestrator | Monday 02 March 2026 01:01:09 +0000 (0:00:05.996) 0:00:44.941 ********** 2026-03-02 01:03:16.988588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-02 01:03:16.988595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-02 01:03:16.988600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-02 01:03:16.988609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988697 | orchestrator | 2026-03-02 01:03:16.988702 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-02 01:03:16.988706 | orchestrator | Monday 02 March 2026 01:01:29 +0000 (0:00:19.750) 0:01:04.691 ********** 2026-03-02 01:03:16.988711 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-02 01:03:16.988715 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-02 01:03:16.988719 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-02 01:03:16.988724 | orchestrator | 2026-03-02 01:03:16.988729 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-02 01:03:16.988734 | orchestrator | Monday 02 March 2026 01:01:35 +0000 (0:00:06.233) 0:01:10.924 ********** 2026-03-02 01:03:16.988738 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-02 01:03:16.988743 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-02 01:03:16.988747 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-02 01:03:16.988751 | orchestrator | 2026-03-02 01:03:16.988756 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-02 01:03:16.988761 | orchestrator | Monday 02 March 2026 01:01:39 +0000 (0:00:04.072) 0:01:14.997 ********** 2026-03-02 01:03:16.988768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-02 01:03:16.988773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-02 01:03:16.988784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-02 01:03:16.988789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988900 | orchestrator | 2026-03-02 01:03:16.988907 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-02 01:03:16.988913 | orchestrator | Monday 02 March 2026 01:01:43 +0000 (0:00:03.795) 0:01:18.793 ********** 2026-03-02 01:03:16.988920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-02 01:03:16.988929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-02 01:03:16.988939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-02 01:03:16.988949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-02 01:03:16.988975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.988989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.989004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-02 01:03:16.989011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.989018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.989029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.989036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-02 01:03:16.989043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.989053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.989065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.989071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.989081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.989088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.989094 | orchestrator | 2026-03-02 01:03:16.989101 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-02 01:03:16.989107 | orchestrator | Monday 02 March 2026 01:01:46 +0000 (0:00:03.315) 0:01:22.109 ********** 2026-03-02 01:03:16.989113 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:03:16.989120 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:03:16.989126 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:03:16.989133 | orchestrator | 2026-03-02 01:03:16.989139 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-02 01:03:16.989145 | orchestrator | Monday 02 March 2026 01:01:47 +0000 (0:00:00.479) 0:01:22.588 ********** 2026-03-02 01:03:16.989152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-02 01:03:16.989165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-02 01:03:16.989171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.989179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.989186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.989190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.989194 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:03:16.989198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-02 01:03:16.989207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-02 01:03:16.989211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-03-02 01:03:16.989215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.989223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.989231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.989237 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:03:16.989243 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-02 01:03:16.989254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-02 01:03:16.989267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.989273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.989280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.989290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:03:16.989296 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:03:16.989303 | orchestrator | 2026-03-02 01:03:16.989309 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-02 01:03:16.989315 | orchestrator | Monday 02 March 2026 01:01:48 +0000 (0:00:01.169) 0:01:23.763 ********** 2026-03-02 01:03:16.989322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-02 01:03:16.989333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-02 01:03:16.989340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-02 01:03:16.989347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-02 01:03:16.989356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 
'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-02 01:03:16.989363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-02 01:03:16.989422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.989442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.989449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.989456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.989466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.989473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.989493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.989543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.989553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.989560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.989566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.989578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:03:16.989585 | orchestrator | 2026-03-02 01:03:16.989592 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-02 01:03:16.989599 | orchestrator | Monday 02 March 2026 01:01:53 +0000 (0:00:04.794) 0:01:28.557 ********** 2026-03-02 01:03:16.989605 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:03:16.989611 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:03:16.989623 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:03:16.989629 | orchestrator | 2026-03-02 01:03:16.989635 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-03-02 01:03:16.989642 | orchestrator | Monday 02 March 2026 01:01:53 +0000 (0:00:00.413) 0:01:28.971 ********** 2026-03-02 01:03:16.989648 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-02 01:03:16.989655 | orchestrator | 2026-03-02 01:03:16.989661 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-02 01:03:16.989667 | orchestrator | Monday 02 March 2026 01:01:56 +0000 (0:00:02.543) 0:01:31.515 ********** 2026-03-02 01:03:16.989673 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-03-02 01:03:16.989679 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-02 01:03:16.989686 | orchestrator | 2026-03-02 01:03:16.989692 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-02 01:03:16.989698 | orchestrator | Monday 02 March 2026 01:01:58 +0000 (0:00:02.940) 0:01:34.455 ********** 2026-03-02 01:03:16.989705 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:03:16.989711 | orchestrator | 2026-03-02 01:03:16.989717 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-02 01:03:16.989724 | orchestrator | Monday 02 March 2026 01:02:14 +0000 (0:00:15.179) 0:01:49.636 ********** 2026-03-02 01:03:16.989730 | orchestrator | 2026-03-02 01:03:16.989736 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-02 01:03:16.989742 | orchestrator | Monday 02 March 2026 01:02:14 +0000 (0:00:00.125) 0:01:49.761 ********** 2026-03-02 01:03:16.989749 | orchestrator | 2026-03-02 01:03:16.989755 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-02 01:03:16.989761 | orchestrator | Monday 02 March 2026 01:02:14 +0000 (0:00:00.142) 0:01:49.904 ********** 2026-03-02 01:03:16.989767 | orchestrator | 2026-03-02 01:03:16.989773 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-02 01:03:16.989780 | orchestrator | Monday 02 March 2026 01:02:14 +0000 (0:00:00.074) 0:01:49.978 ********** 2026-03-02 01:03:16.989786 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:03:16.989792 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:03:16.989796 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:03:16.989800 | orchestrator | 2026-03-02 01:03:16.989804 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-02 
01:03:16.989808 | orchestrator | Monday 02 March 2026 01:02:27 +0000 (0:00:12.922) 0:02:02.901 ********** 2026-03-02 01:03:16.989811 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:03:16.989815 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:03:16.989819 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:03:16.989822 | orchestrator | 2026-03-02 01:03:16.989826 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-02 01:03:16.989830 | orchestrator | Monday 02 March 2026 01:02:37 +0000 (0:00:09.866) 0:02:12.768 ********** 2026-03-02 01:03:16.989836 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:03:16.989840 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:03:16.989844 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:03:16.989848 | orchestrator | 2026-03-02 01:03:16.989851 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-02 01:03:16.989855 | orchestrator | Monday 02 March 2026 01:02:42 +0000 (0:00:05.079) 0:02:17.847 ********** 2026-03-02 01:03:16.989859 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:03:16.989862 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:03:16.989866 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:03:16.989870 | orchestrator | 2026-03-02 01:03:16.989874 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-02 01:03:16.989877 | orchestrator | Monday 02 March 2026 01:02:48 +0000 (0:00:05.850) 0:02:23.698 ********** 2026-03-02 01:03:16.989881 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:03:16.989885 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:03:16.989892 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:03:16.989895 | orchestrator | 2026-03-02 01:03:16.989899 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-02 01:03:16.989903 
| orchestrator | Monday 02 March 2026 01:02:58 +0000 (0:00:10.051) 0:02:33.750 ********** 2026-03-02 01:03:16.989906 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:03:16.989910 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:03:16.989914 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:03:16.989917 | orchestrator | 2026-03-02 01:03:16.989921 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-02 01:03:16.989925 | orchestrator | Monday 02 March 2026 01:03:08 +0000 (0:00:09.937) 0:02:43.687 ********** 2026-03-02 01:03:16.989928 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:03:16.989932 | orchestrator | 2026-03-02 01:03:16.989936 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 01:03:16.989940 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-02 01:03:16.989944 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-02 01:03:16.989948 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-02 01:03:16.989952 | orchestrator | 2026-03-02 01:03:16.989971 | orchestrator | 2026-03-02 01:03:16.989981 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 01:03:16.989988 | orchestrator | Monday 02 March 2026 01:03:15 +0000 (0:00:07.271) 0:02:50.958 ********** 2026-03-02 01:03:16.989995 | orchestrator | =============================================================================== 2026-03-02 01:03:16.990001 | orchestrator | designate : Copying over designate.conf -------------------------------- 19.75s 2026-03-02 01:03:16.990007 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.18s 2026-03-02 01:03:16.990053 | orchestrator | designate : Restart 
designate-backend-bind9 container ------------------ 12.92s 2026-03-02 01:03:16.990059 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.05s 2026-03-02 01:03:16.990063 | orchestrator | designate : Restart designate-worker container -------------------------- 9.94s 2026-03-02 01:03:16.990067 | orchestrator | designate : Restart designate-api container ----------------------------- 9.87s 2026-03-02 01:03:16.990071 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.27s 2026-03-02 01:03:16.990075 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.82s 2026-03-02 01:03:16.990078 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.23s 2026-03-02 01:03:16.990082 | orchestrator | designate : Copying over config.json files for services ----------------- 6.00s 2026-03-02 01:03:16.990086 | orchestrator | designate : Restart designate-producer container ------------------------ 5.85s 2026-03-02 01:03:16.990090 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.61s 2026-03-02 01:03:16.990093 | orchestrator | designate : Restart designate-central container ------------------------- 5.08s 2026-03-02 01:03:16.990097 | orchestrator | designate : Check designate containers ---------------------------------- 4.79s 2026-03-02 01:03:16.990101 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.35s 2026-03-02 01:03:16.990105 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.07s 2026-03-02 01:03:16.990108 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.80s 2026-03-02 01:03:16.990112 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.79s 2026-03-02 01:03:16.990116 | orchestrator | service-ks-register : designate | 
Creating roles ------------------------ 3.65s 2026-03-02 01:03:16.990120 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.64s 2026-03-02 01:03:16.990127 | orchestrator | 2026-03-02 01:03:16 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:03:16.990131 | orchestrator | 2026-03-02 01:03:16 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:03:20.030188 | orchestrator | 2026-03-02 01:03:20 | INFO  | Task dd74955a-a46f-4b58-ba55-e2111c854aea is in state STARTED 2026-03-02 01:03:20.031305 | orchestrator | 2026-03-02 01:03:20 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:03:20.032483 | orchestrator | 2026-03-02 01:03:20 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:03:20.033584 | orchestrator | 2026-03-02 01:03:20 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:03:20.033631 | orchestrator | 2026-03-02 01:03:20 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:03:23.064530 | orchestrator | 2026-03-02 01:03:23 | INFO  | Task dd74955a-a46f-4b58-ba55-e2111c854aea is in state STARTED 2026-03-02 01:03:23.065513 | orchestrator | 2026-03-02 01:03:23 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:03:23.066350 | orchestrator | 2026-03-02 01:03:23 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:03:23.068339 | orchestrator | 2026-03-02 01:03:23 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:03:23.068409 | orchestrator | 2026-03-02 01:03:23 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:03:26.106415 | orchestrator | 2026-03-02 01:03:26 | INFO  | Task dd74955a-a46f-4b58-ba55-e2111c854aea is in state STARTED 2026-03-02 01:03:26.108005 | orchestrator | 2026-03-02 01:03:26 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state 
STARTED 2026-03-02 01:03:26.109270 | orchestrator | 2026-03-02 01:03:26 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:03:26.110396 | orchestrator | 2026-03-02 01:03:26 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:03:26.110449 | orchestrator | 2026-03-02 01:03:26 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:03:29.151730 | orchestrator | 2026-03-02 01:03:29 | INFO  | Task dd74955a-a46f-4b58-ba55-e2111c854aea is in state STARTED 2026-03-02 01:03:29.154646 | orchestrator | 2026-03-02 01:03:29 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:03:29.157031 | orchestrator | 2026-03-02 01:03:29 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:03:29.159339 | orchestrator | 2026-03-02 01:03:29 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:03:29.159389 | orchestrator | 2026-03-02 01:03:29 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:03:32.203236 | orchestrator | 2026-03-02 01:03:32 | INFO  | Task dd74955a-a46f-4b58-ba55-e2111c854aea is in state STARTED 2026-03-02 01:03:32.204299 | orchestrator | 2026-03-02 01:03:32 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:03:32.207168 | orchestrator | 2026-03-02 01:03:32 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:03:32.209745 | orchestrator | 2026-03-02 01:03:32 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:03:32.209795 | orchestrator | 2026-03-02 01:03:32 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:03:35.258191 | orchestrator | 2026-03-02 01:03:35 | INFO  | Task dd74955a-a46f-4b58-ba55-e2111c854aea is in state STARTED 2026-03-02 01:03:35.260263 | orchestrator | 2026-03-02 01:03:35 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 
01:03:35.262180 | orchestrator | 2026-03-02 01:03:35 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:03:35.263932 | orchestrator | 2026-03-02 01:03:35 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:03:35.263986 | orchestrator | 2026-03-02 01:03:35 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:03:38.306683 | orchestrator | 2026-03-02 01:03:38 | INFO  | Task dd74955a-a46f-4b58-ba55-e2111c854aea is in state STARTED 2026-03-02 01:03:38.307338 | orchestrator | 2026-03-02 01:03:38 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:03:38.308554 | orchestrator | 2026-03-02 01:03:38 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:03:38.309292 | orchestrator | 2026-03-02 01:03:38 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:03:38.309324 | orchestrator | 2026-03-02 01:03:38 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:03:41.367803 | orchestrator | 2026-03-02 01:03:41 | INFO  | Task dd74955a-a46f-4b58-ba55-e2111c854aea is in state STARTED 2026-03-02 01:03:41.371009 | orchestrator | 2026-03-02 01:03:41 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:03:41.373336 | orchestrator | 2026-03-02 01:03:41 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:03:41.375347 | orchestrator | 2026-03-02 01:03:41 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:03:41.375598 | orchestrator | 2026-03-02 01:03:41 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:03:44.411335 | orchestrator | 2026-03-02 01:03:44 | INFO  | Task dd74955a-a46f-4b58-ba55-e2111c854aea is in state SUCCESS 2026-03-02 01:03:44.412246 | orchestrator | 2026-03-02 01:03:44.412276 | orchestrator | 2026-03-02 01:03:44.412283 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2026-03-02 01:03:44.412289 | orchestrator | 2026-03-02 01:03:44.412295 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-02 01:03:44.412301 | orchestrator | Monday 02 March 2026 01:02:34 +0000 (0:00:00.230) 0:00:00.230 ********** 2026-03-02 01:03:44.412307 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:03:44.412313 | orchestrator | ok: [testbed-node-1] 2026-03-02 01:03:44.412319 | orchestrator | ok: [testbed-node-2] 2026-03-02 01:03:44.412325 | orchestrator | 2026-03-02 01:03:44.412331 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-02 01:03:44.412336 | orchestrator | Monday 02 March 2026 01:02:35 +0000 (0:00:00.284) 0:00:00.515 ********** 2026-03-02 01:03:44.412342 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-03-02 01:03:44.412348 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-03-02 01:03:44.412353 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-02 01:03:44.412359 | orchestrator | 2026-03-02 01:03:44.412364 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-02 01:03:44.412369 | orchestrator | 2026-03-02 01:03:44.412375 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-02 01:03:44.412380 | orchestrator | Monday 02 March 2026 01:02:35 +0000 (0:00:00.334) 0:00:00.849 ********** 2026-03-02 01:03:44.412386 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:03:44.412392 | orchestrator | 2026-03-02 01:03:44.412397 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-03-02 01:03:44.412403 | orchestrator | Monday 02 March 2026 01:02:36 +0000 (0:00:00.471) 0:00:01.321 ********** 2026-03-02 
01:03:44.412408 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-02 01:03:44.412428 | orchestrator | 2026-03-02 01:03:44.412433 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-02 01:03:44.412438 | orchestrator | Monday 02 March 2026 01:02:39 +0000 (0:00:03.800) 0:00:05.121 ********** 2026-03-02 01:03:44.412444 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-02 01:03:44.412450 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-02 01:03:44.412456 | orchestrator | 2026-03-02 01:03:44.412461 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-02 01:03:44.412466 | orchestrator | Monday 02 March 2026 01:02:46 +0000 (0:00:06.838) 0:00:11.960 ********** 2026-03-02 01:03:44.412472 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-02 01:03:44.412477 | orchestrator | 2026-03-02 01:03:44.412482 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-02 01:03:44.412487 | orchestrator | Monday 02 March 2026 01:02:50 +0000 (0:00:03.732) 0:00:15.693 ********** 2026-03-02 01:03:44.412493 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-03-02 01:03:44.412498 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-02 01:03:44.412503 | orchestrator | 2026-03-02 01:03:44.412508 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-02 01:03:44.412514 | orchestrator | Monday 02 March 2026 01:02:54 +0000 (0:00:03.767) 0:00:19.460 ********** 2026-03-02 01:03:44.412519 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-02 01:03:44.412524 | orchestrator | 2026-03-02 01:03:44.412529 | orchestrator | TASK [service-ks-register : 
placement | Granting user roles] ******************* 2026-03-02 01:03:44.412535 | orchestrator | Monday 02 March 2026 01:02:56 +0000 (0:00:02.702) 0:00:22.162 ********** 2026-03-02 01:03:44.412540 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-02 01:03:44.412545 | orchestrator | 2026-03-02 01:03:44.412550 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-02 01:03:44.412556 | orchestrator | Monday 02 March 2026 01:03:00 +0000 (0:00:03.561) 0:00:25.723 ********** 2026-03-02 01:03:44.412561 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:03:44.412567 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:03:44.412572 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:03:44.412577 | orchestrator | 2026-03-02 01:03:44.412582 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-02 01:03:44.412588 | orchestrator | Monday 02 March 2026 01:03:00 +0000 (0:00:00.260) 0:00:25.984 ********** 2026-03-02 01:03:44.412595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 
2026-03-02 01:03:44.412612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-02 01:03:44.412623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-02 01:03:44.412628 | orchestrator | 2026-03-02 01:03:44.412633 | orchestrator | TASK [placement : Check if policies shall be 
overwritten] ********************** 2026-03-02 01:03:44.412638 | orchestrator | Monday 02 March 2026 01:03:01 +0000 (0:00:00.752) 0:00:26.737 ********** 2026-03-02 01:03:44.412643 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:03:44.412648 | orchestrator | 2026-03-02 01:03:44.412654 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-02 01:03:44.412659 | orchestrator | Monday 02 March 2026 01:03:01 +0000 (0:00:00.118) 0:00:26.855 ********** 2026-03-02 01:03:44.412665 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:03:44.412670 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:03:44.412675 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:03:44.412680 | orchestrator | 2026-03-02 01:03:44.412685 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-02 01:03:44.412690 | orchestrator | Monday 02 March 2026 01:03:01 +0000 (0:00:00.350) 0:00:27.206 ********** 2026-03-02 01:03:44.412696 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:03:44.412701 | orchestrator | 2026-03-02 01:03:44.412706 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-02 01:03:44.412711 | orchestrator | Monday 02 March 2026 01:03:02 +0000 (0:00:00.523) 0:00:27.730 ********** 2026-03-02 01:03:44.412717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-02 01:03:44.412727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-02 01:03:44.412736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-02 01:03:44.412741 | orchestrator | 2026-03-02 01:03:44.412746 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-02 01:03:44.412752 | orchestrator | Monday 02 March 2026 01:03:03 +0000 (0:00:01.329) 0:00:29.059 ********** 2026-03-02 01:03:44.412757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-02 01:03:44.412762 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:03:44.412768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-02 01:03:44.412773 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:03:44.412783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-02 01:03:44.412793 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:03:44.412799 | orchestrator | 2026-03-02 01:03:44.412804 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-02 01:03:44.412809 | orchestrator | Monday 02 March 2026 01:03:04 +0000 (0:00:00.866) 0:00:29.926 ********** 2026-03-02 01:03:44.412814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-02 01:03:44.412819 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:03:44.412825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-02 01:03:44.412830 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:03:44.412835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-02 01:03:44.412841 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:03:44.412846 | orchestrator | 2026-03-02 01:03:44.412851 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-02 01:03:44.412856 | orchestrator | Monday 02 March 2026 01:03:05 +0000 (0:00:00.782) 0:00:30.709 ********** 2026-03-02 01:03:44.412869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-02 01:03:44.412874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-02 01:03:44.412880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-02 
01:03:44.412885 | orchestrator | 2026-03-02 01:03:44.412890 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-02 01:03:44.412895 | orchestrator | Monday 02 March 2026 01:03:06 +0000 (0:00:01.517) 0:00:32.227 ********** 2026-03-02 01:03:44.412966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-02 01:03:44.412971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-02 01:03:44.412985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-02 01:03:44.412991 | orchestrator | 2026-03-02 01:03:44.412996 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-02 01:03:44.413001 | orchestrator | Monday 02 March 2026 01:03:09 +0000 (0:00:02.624) 0:00:34.851 ********** 2026-03-02 01:03:44.413007 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-02 01:03:44.413013 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-02 01:03:44.413018 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-02 01:03:44.413023 | orchestrator | 2026-03-02 01:03:44.413029 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-02 01:03:44.413034 | orchestrator | 
Monday 02 March 2026 01:03:11 +0000 (0:00:01.495) 0:00:36.347 ********** 2026-03-02 01:03:44.413039 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:03:44.413045 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:03:44.413049 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:03:44.413054 | orchestrator | 2026-03-02 01:03:44.413059 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-02 01:03:44.413064 | orchestrator | Monday 02 March 2026 01:03:12 +0000 (0:00:01.236) 0:00:37.583 ********** 2026-03-02 01:03:44.413070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-02 01:03:44.413079 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:03:44.413085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-02 01:03:44.413090 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:03:44.413099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-02 01:03:44.413104 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:03:44.413109 | orchestrator | 2026-03-02 01:03:44.413113 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-02 01:03:44.413118 | orchestrator | Monday 02 March 2026 01:03:12 +0000 (0:00:00.433) 0:00:38.017 ********** 2026-03-02 01:03:44.413123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-02 01:03:44.413128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-02 01:03:44.413141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': 
True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-02 01:03:44.413147 | orchestrator | 2026-03-02 01:03:44.413153 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-02 01:03:44.413158 | orchestrator | Monday 02 March 2026 01:03:13 +0000 (0:00:00.960) 0:00:38.978 ********** 2026-03-02 01:03:44.413163 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:03:44.413168 | orchestrator | 2026-03-02 01:03:44.413173 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-02 01:03:44.413178 | orchestrator | Monday 02 March 2026 01:03:16 +0000 (0:00:02.675) 0:00:41.653 ********** 2026-03-02 01:03:44.413183 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:03:44.413188 | orchestrator | 2026-03-02 01:03:44.413192 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-02 01:03:44.413197 | orchestrator | Monday 02 March 2026 01:03:18 +0000 (0:00:02.235) 0:00:43.889 ********** 2026-03-02 01:03:44.413202 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:03:44.413206 | orchestrator | 2026-03-02 01:03:44.413211 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-02 01:03:44.413216 | orchestrator | Monday 02 March 2026 01:03:32 
+0000 (0:00:13.544) 0:00:57.434 ********** 2026-03-02 01:03:44.413221 | orchestrator | 2026-03-02 01:03:44.413227 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-02 01:03:44.413232 | orchestrator | Monday 02 March 2026 01:03:32 +0000 (0:00:00.065) 0:00:57.499 ********** 2026-03-02 01:03:44.413236 | orchestrator | 2026-03-02 01:03:44.413244 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-02 01:03:44.413250 | orchestrator | Monday 02 March 2026 01:03:32 +0000 (0:00:00.064) 0:00:57.563 ********** 2026-03-02 01:03:44.413255 | orchestrator | 2026-03-02 01:03:44.413260 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-02 01:03:44.413265 | orchestrator | Monday 02 March 2026 01:03:32 +0000 (0:00:00.064) 0:00:57.628 ********** 2026-03-02 01:03:44.413270 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:03:44.413275 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:03:44.413279 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:03:44.413284 | orchestrator | 2026-03-02 01:03:44.413289 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 01:03:44.413295 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-02 01:03:44.413301 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-02 01:03:44.413306 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-02 01:03:44.413310 | orchestrator | 2026-03-02 01:03:44.413315 | orchestrator | 2026-03-02 01:03:44.413321 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 01:03:44.413326 | orchestrator | Monday 02 March 2026 01:03:42 +0000 (0:00:10.537) 0:01:08.165 
********** 2026-03-02 01:03:44.413334 | orchestrator | =============================================================================== 2026-03-02 01:03:44.413340 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.54s 2026-03-02 01:03:44.413345 | orchestrator | placement : Restart placement-api container ---------------------------- 10.54s 2026-03-02 01:03:44.413350 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.84s 2026-03-02 01:03:44.413355 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.80s 2026-03-02 01:03:44.413360 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.77s 2026-03-02 01:03:44.413365 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.73s 2026-03-02 01:03:44.413370 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.56s 2026-03-02 01:03:44.413376 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 2.70s 2026-03-02 01:03:44.413382 | orchestrator | placement : Creating placement databases -------------------------------- 2.68s 2026-03-02 01:03:44.413387 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.62s 2026-03-02 01:03:44.413392 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.24s 2026-03-02 01:03:44.413397 | orchestrator | placement : Copying over config.json files for services ----------------- 1.52s 2026-03-02 01:03:44.413402 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.50s 2026-03-02 01:03:44.413406 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.33s 2026-03-02 01:03:44.413411 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.24s 
2026-03-02 01:03:44.413417 | orchestrator | placement : Check placement containers ---------------------------------- 0.96s 2026-03-02 01:03:44.413422 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.87s 2026-03-02 01:03:44.413427 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.78s 2026-03-02 01:03:44.413432 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.75s 2026-03-02 01:03:44.413437 | orchestrator | placement : include_tasks ----------------------------------------------- 0.52s 2026-03-02 01:03:44.413847 | orchestrator | 2026-03-02 01:03:44 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:03:44.414504 | orchestrator | 2026-03-02 01:03:44 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:03:44.415309 | orchestrator | 2026-03-02 01:03:44 | INFO  | Task 6ca7147d-e0db-42e5-9dc0-af397d8a2647 is in state STARTED 2026-03-02 01:03:44.417751 | orchestrator | 2026-03-02 01:03:44 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:03:44.417795 | orchestrator | 2026-03-02 01:03:44 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:03:47.445120 | orchestrator | 2026-03-02 01:03:47 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:03:47.445383 | orchestrator | 2026-03-02 01:03:47 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:03:47.446034 | orchestrator | 2026-03-02 01:03:47 | INFO  | Task 6ca7147d-e0db-42e5-9dc0-af397d8a2647 is in state STARTED 2026-03-02 01:03:47.446445 | orchestrator | 2026-03-02 01:03:47 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:03:47.446460 | orchestrator | 2026-03-02 01:03:47 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:03:50.475223 | orchestrator | 2026-03-02 
01:03:50 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:03:50.477583 | orchestrator | 2026-03-02 01:03:50 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:03:50.479316 | orchestrator | 2026-03-02 01:03:50 | INFO  | Task 6ca7147d-e0db-42e5-9dc0-af397d8a2647 is in state SUCCESS 2026-03-02 01:03:50.481493 | orchestrator | 2026-03-02 01:03:50 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:03:50.485639 | orchestrator | 2026-03-02 01:03:50 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:03:50.486734 | orchestrator | 2026-03-02 01:03:50 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:03:53.528195 | orchestrator | 2026-03-02 01:03:53 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:03:53.530129 | orchestrator | 2026-03-02 01:03:53 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:03:53.531682 | orchestrator | 2026-03-02 01:03:53 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:03:53.533690 | orchestrator | 2026-03-02 01:03:53 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:03:53.533728 | orchestrator | 2026-03-02 01:03:53 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:03:56.570556 | orchestrator | 2026-03-02 01:03:56 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:03:56.572918 | orchestrator | 2026-03-02 01:03:56 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:03:56.573545 | orchestrator | 2026-03-02 01:03:56 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:03:56.574314 | orchestrator | 2026-03-02 01:03:56 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:03:56.574429 | orchestrator | 2026-03-02 
01:03:56 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:03:59.603733 | orchestrator | 2026-03-02 01:03:59 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:03:59.604164 | orchestrator | 2026-03-02 01:03:59 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:03:59.605317 | orchestrator | 2026-03-02 01:03:59 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:03:59.606625 | orchestrator | 2026-03-02 01:03:59 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:03:59.606656 | orchestrator | 2026-03-02 01:03:59 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:04:02.640503 | orchestrator | 2026-03-02 01:04:02 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:04:02.640558 | orchestrator | 2026-03-02 01:04:02 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:04:02.641283 | orchestrator | 2026-03-02 01:04:02 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:04:02.642172 | orchestrator | 2026-03-02 01:04:02 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:04:02.642212 | orchestrator | 2026-03-02 01:04:02 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:04:05.678264 | orchestrator | 2026-03-02 01:04:05 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:04:05.679050 | orchestrator | 2026-03-02 01:04:05 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:04:05.680035 | orchestrator | 2026-03-02 01:04:05 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:04:05.681206 | orchestrator | 2026-03-02 01:04:05 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:04:05.681230 | orchestrator | 2026-03-02 01:04:05 | INFO  | Wait 1 
second(s) until the next check 2026-03-02 01:04:08.716574 | orchestrator | 2026-03-02 01:04:08 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:04:08.716635 | orchestrator | 2026-03-02 01:04:08 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:04:08.717399 | orchestrator | 2026-03-02 01:04:08 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:04:08.719771 | orchestrator | 2026-03-02 01:04:08 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:04:08.719826 | orchestrator | 2026-03-02 01:04:08 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:04:11.768284 | orchestrator | 2026-03-02 01:04:11 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:04:11.768336 | orchestrator | 2026-03-02 01:04:11 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:04:11.768341 | orchestrator | 2026-03-02 01:04:11 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:04:11.768344 | orchestrator | 2026-03-02 01:04:11 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:04:11.768347 | orchestrator | 2026-03-02 01:04:11 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:04:14.800329 | orchestrator | 2026-03-02 01:04:14 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:04:14.800708 | orchestrator | 2026-03-02 01:04:14 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:04:14.801515 | orchestrator | 2026-03-02 01:04:14 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:04:14.802147 | orchestrator | 2026-03-02 01:04:14 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:04:14.802174 | orchestrator | 2026-03-02 01:04:14 | INFO  | Wait 1 second(s) until the next check 
2026-03-02 01:04:17.871257 | orchestrator | 2026-03-02 01:04:17 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:04:17.871305 | orchestrator | 2026-03-02 01:04:17 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:04:17.871310 | orchestrator | 2026-03-02 01:04:17 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:04:17.871313 | orchestrator | 2026-03-02 01:04:17 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:04:17.871317 | orchestrator | 2026-03-02 01:04:17 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:04:20.868632 | orchestrator | 2026-03-02 01:04:20 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:04:20.870983 | orchestrator | 2026-03-02 01:04:20 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:04:20.983099 | orchestrator | 2026-03-02 01:04:20 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:04:20.983152 | orchestrator | 2026-03-02 01:04:20 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:04:20.983163 | orchestrator | 2026-03-02 01:04:20 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:04:23.900690 | orchestrator | 2026-03-02 01:04:23 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED 2026-03-02 01:04:23.901359 | orchestrator | 2026-03-02 01:04:23 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:04:23.901860 | orchestrator | 2026-03-02 01:04:23 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:04:23.903371 | orchestrator | 2026-03-02 01:04:23 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:04:23.903408 | orchestrator | 2026-03-02 01:04:23 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:04:26.926490 | 
orchestrator | 2026-03-02 01:04:26 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state STARTED
2026-03-02 01:04:26.926792 | orchestrator | 2026-03-02 01:04:26 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED
2026-03-02 01:04:26.927342 | orchestrator | 2026-03-02 01:04:26 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED
2026-03-02 01:04:26.927910 | orchestrator | 2026-03-02 01:04:26 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED
2026-03-02 01:04:26.927932 | orchestrator | 2026-03-02 01:04:26 | INFO  | Wait 1 second(s) until the next check
2026-03-02 01:04:29.977774 | orchestrator | 2026-03-02 01:04:29 | INFO  | Task c32e041f-e683-4443-a844-fde29b682da6 is in state SUCCESS
2026-03-02 01:04:29.979282 | orchestrator |
2026-03-02 01:04:29.979328 | orchestrator |
2026-03-02 01:04:29.979334 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-02 01:04:29.979338 | orchestrator |
2026-03-02 01:04:29.979342 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-02 01:04:29.979346 | orchestrator | Monday 02 March 2026 01:03:47 +0000 (0:00:00.146) 0:00:00.146 **********
2026-03-02 01:04:29.979351 | orchestrator | ok: [testbed-node-0]
2026-03-02 01:04:29.979355 | orchestrator | ok: [testbed-node-1]
2026-03-02 01:04:29.979359 | orchestrator | ok: [testbed-node-2]
2026-03-02 01:04:29.979362 | orchestrator |
2026-03-02 01:04:29.979367 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-02 01:04:29.979371 | orchestrator | Monday 02 March 2026 01:03:48 +0000 (0:00:00.219) 0:00:00.365 **********
2026-03-02 01:04:29.979375 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-02 01:04:29.979378 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-02 01:04:29.979382 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-02 01:04:29.979386 | orchestrator |
2026-03-02 01:04:29.979390 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-03-02 01:04:29.979393 | orchestrator |
2026-03-02 01:04:29.979397 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-03-02 01:04:29.979401 | orchestrator | Monday 02 March 2026 01:03:48 +0000 (0:00:00.580) 0:00:00.946 **********
2026-03-02 01:04:29.979405 | orchestrator | ok: [testbed-node-2]
2026-03-02 01:04:29.979410 | orchestrator | ok: [testbed-node-1]
2026-03-02 01:04:29.979417 | orchestrator | ok: [testbed-node-0]
2026-03-02 01:04:29.979423 | orchestrator |
2026-03-02 01:04:29.979429 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 01:04:29.979435 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 01:04:29.979442 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 01:04:29.979448 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-02 01:04:29.979454 | orchestrator |
2026-03-02 01:04:29.979460 | orchestrator |
2026-03-02 01:04:29.979467 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 01:04:29.979473 | orchestrator | Monday 02 March 2026 01:03:49 +0000 (0:00:00.630) 0:00:01.577 **********
2026-03-02 01:04:29.979479 | orchestrator | ===============================================================================
2026-03-02 01:04:29.979486 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.63s
2026-03-02 01:04:29.979493 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s
2026-03-02 01:04:29.979510 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.22s
2026-03-02 01:04:29.979514 | orchestrator |
2026-03-02 01:04:29.979518 | orchestrator |
2026-03-02 01:04:29.979522 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-02 01:04:29.979526 | orchestrator |
2026-03-02 01:04:29.979529 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-02 01:04:29.979533 | orchestrator | Monday 02 March 2026 01:00:25 +0000 (0:00:00.332) 0:00:00.332 **********
2026-03-02 01:04:29.979537 | orchestrator | ok: [testbed-node-0]
2026-03-02 01:04:29.979541 | orchestrator | ok: [testbed-node-1]
2026-03-02 01:04:29.979544 | orchestrator | ok: [testbed-node-2]
2026-03-02 01:04:29.979548 | orchestrator | ok: [testbed-node-3]
2026-03-02 01:04:29.979552 | orchestrator | ok: [testbed-node-4]
2026-03-02 01:04:29.979555 | orchestrator | ok: [testbed-node-5]
2026-03-02 01:04:29.979560 | orchestrator |
2026-03-02 01:04:29.979566 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-02 01:04:29.979572 | orchestrator | Monday 02 March 2026 01:00:26 +0000 (0:00:00.963) 0:00:01.296 **********
2026-03-02 01:04:29.979578 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-03-02 01:04:29.979585 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-03-02 01:04:29.979591 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-03-02 01:04:29.979595 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-03-02 01:04:29.979599 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-03-02 01:04:29.979603 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-03-02 01:04:29.979606 | orchestrator |
2026-03-02 01:04:29.979610 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-03-02 01:04:29.979614 | orchestrator |
2026-03-02 01:04:29.979673 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-02 01:04:29.979681 | orchestrator | Monday 02 March 2026 01:00:26 +0000 (0:00:00.749) 0:00:02.046 **********
2026-03-02 01:04:29.979687 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-02 01:04:29.979694 | orchestrator |
2026-03-02 01:04:29.979703 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-03-02 01:04:29.979746 | orchestrator | Monday 02 March 2026 01:00:27 +0000 (0:00:01.009) 0:00:03.056 **********
2026-03-02 01:04:29.979754 | orchestrator | ok: [testbed-node-1]
2026-03-02 01:04:29.979760 | orchestrator | ok: [testbed-node-2]
2026-03-02 01:04:29.979767 | orchestrator | ok: [testbed-node-0]
2026-03-02 01:04:29.979773 | orchestrator | ok: [testbed-node-3]
2026-03-02 01:04:29.979780 | orchestrator | ok: [testbed-node-4]
2026-03-02 01:04:29.979786 | orchestrator | ok: [testbed-node-5]
2026-03-02 01:04:29.979792 | orchestrator |
2026-03-02 01:04:29.979798 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-03-02 01:04:29.980060 | orchestrator | Monday 02 March 2026 01:00:29 +0000 (0:00:01.160) 0:00:04.216 **********
2026-03-02 01:04:29.980069 | orchestrator | ok: [testbed-node-1]
2026-03-02 01:04:29.980076 | orchestrator | ok: [testbed-node-0]
2026-03-02 01:04:29.980082 | orchestrator | ok: [testbed-node-2]
2026-03-02 01:04:29.980087 | orchestrator | ok: [testbed-node-3]
2026-03-02 01:04:29.980091 | orchestrator | ok: [testbed-node-4]
2026-03-02 01:04:29.980103 | orchestrator | ok: [testbed-node-5]
2026-03-02 01:04:29.980108 | orchestrator |
2026-03-02 01:04:29.980150 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-03-02 01:04:29.980154 | orchestrator | Monday 02 March 2026 01:00:30 +0000 (0:00:00.994) 0:00:05.211 **********
2026-03-02 01:04:29.980158 | orchestrator | ok: [testbed-node-0] => {
2026-03-02 01:04:29.980165 | orchestrator |  "changed": false,
2026-03-02 01:04:29.980170 | orchestrator |  "msg": "All assertions passed"
2026-03-02 01:04:29.980174 | orchestrator | }
2026-03-02 01:04:29.980178 | orchestrator | ok: [testbed-node-1] => {
2026-03-02 01:04:29.980188 | orchestrator |  "changed": false,
2026-03-02 01:04:29.980192 | orchestrator |  "msg": "All assertions passed"
2026-03-02 01:04:29.980196 | orchestrator | }
2026-03-02 01:04:29.980199 | orchestrator | ok: [testbed-node-2] => {
2026-03-02 01:04:29.980230 | orchestrator |  "changed": false,
2026-03-02 01:04:29.980236 | orchestrator |  "msg": "All assertions passed"
2026-03-02 01:04:29.980240 | orchestrator | }
2026-03-02 01:04:29.980244 | orchestrator | ok: [testbed-node-3] => {
2026-03-02 01:04:29.980251 | orchestrator |  "changed": false,
2026-03-02 01:04:29.980255 | orchestrator |  "msg": "All assertions passed"
2026-03-02 01:04:29.980259 | orchestrator | }
2026-03-02 01:04:29.980264 | orchestrator | ok: [testbed-node-4] => {
2026-03-02 01:04:29.980378 | orchestrator |  "changed": false,
2026-03-02 01:04:29.980384 | orchestrator |  "msg": "All assertions passed"
2026-03-02 01:04:29.980388 | orchestrator | }
2026-03-02 01:04:29.980392 | orchestrator | ok: [testbed-node-5] => {
2026-03-02 01:04:29.980395 | orchestrator |  "changed": false,
2026-03-02 01:04:29.980399 | orchestrator |  "msg": "All assertions passed"
2026-03-02 01:04:29.980403 | orchestrator | }
2026-03-02 01:04:29.980407 | orchestrator |
2026-03-02 01:04:29.980410 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-03-02 01:04:29.980414 | orchestrator | Monday 02 March 2026 01:00:30 +0000 (0:00:00.657) 0:00:05.869 **********
2026-03-02 01:04:29.980418 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:04:29.980421 | orchestrator | skipping: [testbed-node-1]
2026-03-02 01:04:29.980425 | orchestrator | skipping: [testbed-node-2]
2026-03-02 01:04:29.980429 | orchestrator | skipping: [testbed-node-3]
2026-03-02 01:04:29.980432 | orchestrator | skipping: [testbed-node-4]
2026-03-02 01:04:29.980436 | orchestrator | skipping: [testbed-node-5]
2026-03-02 01:04:29.980440 | orchestrator |
2026-03-02 01:04:29.980443 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-03-02 01:04:29.980447 | orchestrator | Monday 02 March 2026 01:00:31 +0000 (0:00:00.447) 0:00:06.317 **********
2026-03-02 01:04:29.980451 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-03-02 01:04:29.980455 | orchestrator |
2026-03-02 01:04:29.980458 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-03-02 01:04:29.980462 | orchestrator | Monday 02 March 2026 01:00:34 +0000 (0:00:03.417) 0:00:09.735 **********
2026-03-02 01:04:29.980466 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-03-02 01:04:29.980470 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-03-02 01:04:29.980474 | orchestrator |
2026-03-02 01:04:29.980477 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-03-02 01:04:29.980481 | orchestrator | Monday 02 March 2026 01:00:40 +0000 (0:00:06.199) 0:00:15.934 **********
2026-03-02 01:04:29.980485 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-02 01:04:29.980489 | orchestrator |
2026-03-02 01:04:29.980492 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-03-02 01:04:29.980496 | orchestrator | Monday 02 March 2026 01:00:44 +0000 (0:00:03.765) 0:00:19.700 **********
2026-03-02 01:04:29.980500 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-03-02 01:04:29.980503 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-02 01:04:29.980507 | orchestrator |
2026-03-02 01:04:29.980511 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-03-02 01:04:29.980515 | orchestrator | Monday 02 March 2026 01:00:48 +0000 (0:00:04.289) 0:00:23.990 **********
2026-03-02 01:04:29.980518 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-02 01:04:29.980522 | orchestrator |
2026-03-02 01:04:29.980526 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-03-02 01:04:29.980529 | orchestrator | Monday 02 March 2026 01:00:52 +0000 (0:00:03.443) 0:00:27.433 **********
2026-03-02 01:04:29.980533 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-03-02 01:04:29.980541 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-03-02 01:04:29.980544 | orchestrator |
2026-03-02 01:04:29.980548 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-02 01:04:29.980553 | orchestrator | Monday 02 March 2026 01:00:59 +0000 (0:00:07.093) 0:00:34.526 **********
2026-03-02 01:04:29.980559 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:04:29.980565 | orchestrator | skipping: [testbed-node-1]
2026-03-02 01:04:29.980572 | orchestrator | skipping: [testbed-node-2]
2026-03-02 01:04:29.980578 | orchestrator | skipping: [testbed-node-3]
2026-03-02 01:04:29.980584 | orchestrator | skipping: [testbed-node-4]
2026-03-02 01:04:29.980590 | orchestrator | skipping: [testbed-node-5]
2026-03-02 01:04:29.980596 | orchestrator |
2026-03-02 01:04:29.980602 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-03-02 01:04:29.980609 | orchestrator | Monday 02 March 2026 01:00:59 +0000 (0:00:00.627) 0:00:35.154 **********
2026-03-02 01:04:29.980615 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:04:29.980621 | orchestrator | skipping: [testbed-node-1]
2026-03-02 01:04:29.980627 | orchestrator | skipping: [testbed-node-3]
2026-03-02 01:04:29.980634 | orchestrator | skipping: [testbed-node-4]
2026-03-02 01:04:29.980641 | orchestrator | skipping: [testbed-node-5]
2026-03-02 01:04:29.980645 | orchestrator | skipping: [testbed-node-2]
2026-03-02 01:04:29.980649 | orchestrator |
2026-03-02 01:04:29.980653 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-03-02 01:04:29.980656 | orchestrator | Monday 02 March 2026 01:01:01 +0000 (0:00:01.728) 0:00:36.882 **********
2026-03-02 01:04:29.980660 | orchestrator | ok: [testbed-node-0]
2026-03-02 01:04:29.980664 | orchestrator | ok: [testbed-node-2]
2026-03-02 01:04:29.980668 | orchestrator | ok: [testbed-node-1]
2026-03-02 01:04:29.980671 | orchestrator | ok: [testbed-node-3]
2026-03-02 01:04:29.980675 | orchestrator | ok: [testbed-node-4]
2026-03-02 01:04:29.980702 | orchestrator | ok: [testbed-node-5]
2026-03-02 01:04:29.980709 | orchestrator |
2026-03-02 01:04:29.980715 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-02 01:04:29.980722 | orchestrator | Monday 02 March 2026 01:01:02 +0000 (0:00:01.081) 0:00:37.963 **********
2026-03-02 01:04:29.980728 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:04:29.980735 | orchestrator | skipping: [testbed-node-1]
2026-03-02 01:04:29.980741 | orchestrator | skipping: [testbed-node-2]
2026-03-02 01:04:29.980746 | orchestrator | skipping: [testbed-node-3]
2026-03-02 01:04:29.980750 | orchestrator | skipping: [testbed-node-4]
2026-03-02 01:04:29.980753 | orchestrator | skipping: [testbed-node-5]
2026-03-02 01:04:29.980757 | orchestrator |
2026-03-02 01:04:29.980761 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-03-02 01:04:29.980765 | orchestrator | Monday 02 March 2026 01:01:05 +0000 (0:00:02.339) 0:00:40.303 **********
2026-03-02 01:04:29.980770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-02 01:04:29.980776 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-02 01:04:29.980785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-02 01:04:29.980790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-02 01:04:29.980869 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-02 01:04:29.980880 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-02 01:04:29.980887 | orchestrator |
2026-03-02 01:04:29.980894 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-03-02 01:04:29.980901 | orchestrator | Monday 02 March 2026 01:01:07 +0000 (0:00:02.549) 0:00:42.852 **********
2026-03-02 01:04:29.980913 | orchestrator | [WARNING]: Skipped
2026-03-02 01:04:29.980920 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2026-03-02 01:04:29.980927 | orchestrator | due to this access issue:
2026-03-02 01:04:29.980934 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2026-03-02 01:04:29.980941 | orchestrator | a directory
2026-03-02 01:04:29.980947 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-02 01:04:29.980955 | orchestrator |
2026-03-02 01:04:29.980961 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-02 01:04:29.980968 | orchestrator | Monday 02 March 2026 01:01:08 +0000 (0:00:00.817) 0:00:43.670 **********
2026-03-02 01:04:29.980975 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-02 01:04:29.980980 | orchestrator |
2026-03-02 01:04:29.980983 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2026-03-02 01:04:29.981009 | orchestrator | Monday 02 March 2026 01:01:09 +0000 (0:00:01.156) 0:00:44.826 **********
2026-03-02 01:04:29.981016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-02 01:04:29.981044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-02 01:04:29.981052 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-02 01:04:29.981057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-02 01:04:29.981066 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-02 01:04:29.981071 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-02 01:04:29.981076 | orchestrator |
2026-03-02 01:04:29.981080 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2026-03-02 01:04:29.981085 | orchestrator | Monday 02 March 2026 01:01:13 +0000 (0:00:03.827) 0:00:48.653 **********
2026-03-02 01:04:29.981101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-02 01:04:29.981106 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:04:29.981111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-02 01:04:29.981119 | orchestrator | skipping: [testbed-node-2]
2026-03-02 01:04:29.981124 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-02 01:04:29.981129 | orchestrator | skipping: [testbed-node-5]
2026-03-02 01:04:29.981133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-02 01:04:29.981138 | orchestrator | skipping: [testbed-node-3]
2026-03-02 01:04:29.981143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-02 01:04:29.981147 | orchestrator | skipping: [testbed-node-1]
2026-03-02 01:04:29.981163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-02 01:04:29.981169 | orchestrator | skipping: [testbed-node-4]
2026-03-02 01:04:29.981173 | orchestrator |
2026-03-02 01:04:29.981178 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2026-03-02 01:04:29.981182 | orchestrator | Monday 02 March 2026 01:01:16 +0000 (0:00:02.877) 0:00:51.530 **********
2026-03-02 01:04:29.981190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-02 01:04:29.981195 | orchestrator | skipping: [testbed-node-3]
2026-03-02 01:04:29.981200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-02 01:04:29.981204 | orchestrator | skipping: [testbed-node-2]
2026-03-02 01:04:29.981209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-02 01:04:29.981213 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:04:29.981218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-02 01:04:29.981223 | orchestrator | skipping: [testbed-node-1]
2026-03-02 01:04:29.981230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-02 01:04:29.981238 | orchestrator | skipping: [testbed-node-5]
2026-03-02 01:04:29.981243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-02 01:04:29.981248 | orchestrator | skipping: [testbed-node-4]
2026-03-02 01:04:29.981252 | orchestrator |
2026-03-02 01:04:29.981256 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2026-03-02 01:04:29.981261 | orchestrator | Monday 02 March 2026 01:01:19 +0000 (0:00:03.246) 0:00:54.776 **********
2026-03-02 01:04:29.981265 | orchestrator | skipping:
[testbed-node-0] 2026-03-02 01:04:29.981270 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.981274 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.981278 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.981283 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.981287 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.981292 | orchestrator | 2026-03-02 01:04:29.981296 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-02 01:04:29.981301 | orchestrator | Monday 02 March 2026 01:01:22 +0000 (0:00:02.673) 0:00:57.450 ********** 2026-03-02 01:04:29.981305 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.981310 | orchestrator | 2026-03-02 01:04:29.981314 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-02 01:04:29.981319 | orchestrator | Monday 02 March 2026 01:01:22 +0000 (0:00:00.152) 0:00:57.603 ********** 2026-03-02 01:04:29.981323 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.981328 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.981332 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.981336 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.981340 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.981345 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.981349 | orchestrator | 2026-03-02 01:04:29.981354 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-02 01:04:29.981359 | orchestrator | Monday 02 March 2026 01:01:23 +0000 (0:00:00.626) 0:00:58.230 ********** 2026-03-02 01:04:29.981364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-02 01:04:29.981371 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.981381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-02 01:04:29.981385 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.981398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 01:04:29.981406 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.981410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-02 01:04:29.981414 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.981418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 01:04:29.981422 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.981426 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 01:04:29.981433 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.981437 | orchestrator | 2026-03-02 01:04:29.981440 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-02 01:04:29.981444 | orchestrator | Monday 02 March 2026 01:01:25 +0000 (0:00:02.504) 0:01:00.734 ********** 2026-03-02 01:04:29.981451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-02 01:04:29.981456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-02 01:04:29.981460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-02 01:04:29.981464 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-02 01:04:29.981470 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-02 01:04:29.981481 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-02 01:04:29.981488 | orchestrator | 2026-03-02 01:04:29.981498 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-02 01:04:29.981504 | orchestrator | Monday 02 March 2026 01:01:29 +0000 (0:00:03.796) 0:01:04.531 ********** 2026-03-02 01:04:29.981511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-02 01:04:29.981518 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-02 01:04:29.981525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-02 01:04:29.981539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-02 01:04:29.981546 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-02 01:04:29.981553 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-02 01:04:29.981559 | orchestrator | 2026-03-02 01:04:29.981566 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-02 01:04:29.981572 | orchestrator | Monday 02 March 2026 01:01:35 +0000 (0:00:06.277) 0:01:10.808 ********** 2026-03-02 01:04:29.981579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-02 01:04:29.981592 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.981598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-02 01:04:29.981604 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.981615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-02 01:04:29.981621 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.981628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 01:04:29.981634 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.981641 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 01:04:29.981647 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.981653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 01:04:29.981664 | orchestrator | 
skipping: [testbed-node-5] 2026-03-02 01:04:29.981670 | orchestrator | 2026-03-02 01:04:29.981676 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-02 01:04:29.981682 | orchestrator | Monday 02 March 2026 01:01:39 +0000 (0:00:03.490) 0:01:14.299 ********** 2026-03-02 01:04:29.981688 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:04:29.981694 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.981700 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:04:29.981707 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.981714 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.981721 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:04:29.981728 | orchestrator | 2026-03-02 01:04:29.981735 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-02 01:04:29.981741 | orchestrator | Monday 02 March 2026 01:01:42 +0000 (0:00:03.522) 0:01:17.821 ********** 2026-03-02 01:04:29.981748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 01:04:29.981755 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.981766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 01:04:29.981774 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.981780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 01:04:29.981787 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.981795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-02 01:04:29.981818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-02 01:04:29.981828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-02 01:04:29.981832 | orchestrator | 2026-03-02 01:04:29.981836 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-02 01:04:29.981840 | orchestrator | Monday 02 March 2026 01:01:47 +0000 (0:00:04.411) 0:01:22.232 ********** 2026-03-02 01:04:29.981844 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.981847 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.981851 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.981855 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.981859 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.981862 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.981866 | orchestrator | 2026-03-02 01:04:29.981870 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-02 01:04:29.981874 | orchestrator | Monday 02 March 2026 01:01:49 +0000 (0:00:02.835) 0:01:25.067 ********** 2026-03-02 01:04:29.981877 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.981881 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.981885 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.981889 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.981892 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.981896 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.981900 | orchestrator | 2026-03-02 01:04:29.981904 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-02 01:04:29.981910 | orchestrator | Monday 02 
March 2026 01:01:51 +0000 (0:00:02.038) 0:01:27.106 ********** 2026-03-02 01:04:29.981914 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.981917 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.981921 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.981925 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.981928 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.981932 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.981936 | orchestrator | 2026-03-02 01:04:29.981940 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-02 01:04:29.981943 | orchestrator | Monday 02 March 2026 01:01:53 +0000 (0:00:01.976) 0:01:29.083 ********** 2026-03-02 01:04:29.981947 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.981951 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.981954 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.981958 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.981962 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.982073 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.982088 | orchestrator | 2026-03-02 01:04:29.982095 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-02 01:04:29.982102 | orchestrator | Monday 02 March 2026 01:01:55 +0000 (0:00:01.880) 0:01:30.963 ********** 2026-03-02 01:04:29.982109 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.982113 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.982116 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.982120 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.982124 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.982128 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.982132 | orchestrator | 2026-03-02 01:04:29.982135 | orchestrator 
| TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-02 01:04:29.982139 | orchestrator | Monday 02 March 2026 01:01:57 +0000 (0:00:01.904) 0:01:32.868 ********** 2026-03-02 01:04:29.982143 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.982147 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.982151 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.982156 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.982162 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.982172 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.982179 | orchestrator | 2026-03-02 01:04:29.982184 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-02 01:04:29.982191 | orchestrator | Monday 02 March 2026 01:01:59 +0000 (0:00:01.910) 0:01:34.778 ********** 2026-03-02 01:04:29.982202 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-02 01:04:29.982209 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.982216 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-02 01:04:29.982222 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.982229 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-02 01:04:29.982236 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.982243 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-02 01:04:29.982248 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.982256 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-02 01:04:29.982260 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.982264 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-02 01:04:29.982268 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.982272 | orchestrator | 2026-03-02 01:04:29.982276 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-02 01:04:29.982284 | orchestrator | Monday 02 March 2026 01:02:01 +0000 (0:00:01.926) 0:01:36.705 ********** 2026-03-02 01:04:29.982298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-02 01:04:29.982310 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.982316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-02 01:04:29.982323 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.982329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-02 01:04:29.982335 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.982341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 01:04:29.982347 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.982354 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 01:04:29.982366 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.982377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 01:04:29.982384 | orchestrator | skipping: [testbed-node-4] 
2026-03-02 01:04:29.982390 | orchestrator | 2026-03-02 01:04:29.982397 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-02 01:04:29.982403 | orchestrator | Monday 02 March 2026 01:02:03 +0000 (0:00:02.060) 0:01:38.766 ********** 2026-03-02 01:04:29.982411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-02 01:04:29.982418 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.982425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-02 01:04:29.982432 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.982439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-02 01:04:29.982450 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.982460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 01:04:29.982467 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.982485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 01:04:29.982498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 01:04:29.982505 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.982512 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.982518 | orchestrator | 2026-03-02 01:04:29.982525 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] 
******************************* 2026-03-02 01:04:29.982531 | orchestrator | Monday 02 March 2026 01:02:05 +0000 (0:00:02.016) 0:01:40.782 ********** 2026-03-02 01:04:29.982538 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.982544 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.982550 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.982557 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.982563 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.982570 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.982576 | orchestrator | 2026-03-02 01:04:29.982582 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-02 01:04:29.982588 | orchestrator | Monday 02 March 2026 01:02:07 +0000 (0:00:01.770) 0:01:42.553 ********** 2026-03-02 01:04:29.982595 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.982605 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.982611 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.982617 | orchestrator | changed: [testbed-node-5] 2026-03-02 01:04:29.982623 | orchestrator | changed: [testbed-node-4] 2026-03-02 01:04:29.982629 | orchestrator | changed: [testbed-node-3] 2026-03-02 01:04:29.982636 | orchestrator | 2026-03-02 01:04:29.982643 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-02 01:04:29.982649 | orchestrator | Monday 02 March 2026 01:02:11 +0000 (0:00:03.890) 0:01:46.443 ********** 2026-03-02 01:04:29.982656 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.982662 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.982668 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.982674 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.982680 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.982687 | orchestrator | skipping: [testbed-node-4] 
2026-03-02 01:04:29.982693 | orchestrator | 2026-03-02 01:04:29.982699 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-02 01:04:29.982706 | orchestrator | Monday 02 March 2026 01:02:13 +0000 (0:00:01.713) 0:01:48.156 ********** 2026-03-02 01:04:29.982713 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.982719 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.982726 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.982732 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.982739 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.982745 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.982751 | orchestrator | 2026-03-02 01:04:29.982758 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-02 01:04:29.982765 | orchestrator | Monday 02 March 2026 01:02:15 +0000 (0:00:02.232) 0:01:50.389 ********** 2026-03-02 01:04:29.982771 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.982778 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.982784 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.982790 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.982797 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.982816 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.982823 | orchestrator | 2026-03-02 01:04:29.982829 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-02 01:04:29.982836 | orchestrator | Monday 02 March 2026 01:02:18 +0000 (0:00:03.751) 0:01:54.141 ********** 2026-03-02 01:04:29.982843 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.982849 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.982855 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.982862 | orchestrator | skipping: [testbed-node-2] 
2026-03-02 01:04:29.982869 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.982879 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.982886 | orchestrator | 2026-03-02 01:04:29.982892 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-02 01:04:29.982899 | orchestrator | Monday 02 March 2026 01:02:20 +0000 (0:00:01.771) 0:01:55.913 ********** 2026-03-02 01:04:29.982905 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.982912 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.982918 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.982924 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.982931 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.982937 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.982943 | orchestrator | 2026-03-02 01:04:29.982949 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-03-02 01:04:29.982955 | orchestrator | Monday 02 March 2026 01:02:22 +0000 (0:00:02.145) 0:01:58.058 ********** 2026-03-02 01:04:29.982962 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.982968 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.982974 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.982979 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.982989 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.982999 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.983006 | orchestrator | 2026-03-02 01:04:29.983012 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-02 01:04:29.983018 | orchestrator | Monday 02 March 2026 01:02:24 +0000 (0:00:01.798) 0:01:59.856 ********** 2026-03-02 01:04:29.983024 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.983030 | orchestrator | skipping: [testbed-node-1] 
2026-03-02 01:04:29.983037 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.983043 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.983049 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.983056 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.983062 | orchestrator | 2026-03-02 01:04:29.983068 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-02 01:04:29.983075 | orchestrator | Monday 02 March 2026 01:02:26 +0000 (0:00:01.506) 0:02:01.362 ********** 2026-03-02 01:04:29.983081 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-02 01:04:29.983088 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.983094 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-02 01:04:29.983101 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.983107 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-02 01:04:29.983113 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.983119 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-02 01:04:29.983126 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.983132 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-02 01:04:29.983139 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.983145 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-02 01:04:29.983151 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.983157 | orchestrator | 2026-03-02 01:04:29.983164 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] 
******************************** 2026-03-02 01:04:29.983170 | orchestrator | Monday 02 March 2026 01:02:28 +0000 (0:00:02.089) 0:02:03.452 ********** 2026-03-02 01:04:29.983177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-02 01:04:29.983183 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.983194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-02 01:04:29.983205 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.983212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 01:04:29.983219 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.983226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-02 01:04:29.983232 | orchestrator | skipping: 
[testbed-node-1] 2026-03-02 01:04:29.983239 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 01:04:29.983244 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.983248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-02 01:04:29.983252 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.983256 | orchestrator | 2026-03-02 01:04:29.983260 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-02 01:04:29.983266 | orchestrator | Monday 02 March 2026 01:02:30 +0000 (0:00:02.241) 0:02:05.693 
********** 2026-03-02 01:04:29.983274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-02 01:04:29.983279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-02 01:04:29.983283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-02 01:04:29.983287 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-02 01:04:29.983291 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-02 01:04:29.983301 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-02 01:04:29.983305 | orchestrator | 2026-03-02 01:04:29.983309 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-02 01:04:29.983313 | orchestrator | Monday 02 March 2026 01:02:33 +0000 (0:00:02.629) 0:02:08.322 ********** 2026-03-02 01:04:29.983317 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:04:29.983321 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:04:29.983325 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:04:29.983329 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:04:29.983332 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:04:29.983336 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:04:29.983340 | orchestrator | 2026-03-02 01:04:29.983344 | orchestrator | TASK [neutron : Creating Neutron database] 
************************************* 2026-03-02 01:04:29.983348 | orchestrator | Monday 02 March 2026 01:02:33 +0000 (0:00:00.487) 0:02:08.809 ********** 2026-03-02 01:04:29.983351 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:04:29.983355 | orchestrator | 2026-03-02 01:04:29.983359 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-02 01:04:29.983363 | orchestrator | Monday 02 March 2026 01:02:36 +0000 (0:00:02.922) 0:02:11.731 ********** 2026-03-02 01:04:29.983367 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:04:29.983370 | orchestrator | 2026-03-02 01:04:29.983374 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-02 01:04:29.983378 | orchestrator | Monday 02 March 2026 01:02:38 +0000 (0:00:02.026) 0:02:13.758 ********** 2026-03-02 01:04:29.983382 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:04:29.983386 | orchestrator | 2026-03-02 01:04:29.983389 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-02 01:04:29.983393 | orchestrator | Monday 02 March 2026 01:03:18 +0000 (0:00:39.463) 0:02:53.222 ********** 2026-03-02 01:04:29.983397 | orchestrator | 2026-03-02 01:04:29.983401 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-02 01:04:29.983404 | orchestrator | Monday 02 March 2026 01:03:18 +0000 (0:00:00.068) 0:02:53.290 ********** 2026-03-02 01:04:29.983408 | orchestrator | 2026-03-02 01:04:29.983412 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-02 01:04:29.983415 | orchestrator | Monday 02 March 2026 01:03:18 +0000 (0:00:00.255) 0:02:53.545 ********** 2026-03-02 01:04:29.983419 | orchestrator | 2026-03-02 01:04:29.983423 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-02 01:04:29.983427 | 
orchestrator | Monday 02 March 2026 01:03:18 +0000 (0:00:00.066) 0:02:53.612 ********** 2026-03-02 01:04:29.983431 | orchestrator | 2026-03-02 01:04:29.983434 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-02 01:04:29.983438 | orchestrator | Monday 02 March 2026 01:03:18 +0000 (0:00:00.076) 0:02:53.689 ********** 2026-03-02 01:04:29.983442 | orchestrator | 2026-03-02 01:04:29.983445 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-02 01:04:29.983449 | orchestrator | Monday 02 March 2026 01:03:18 +0000 (0:00:00.068) 0:02:53.757 ********** 2026-03-02 01:04:29.983456 | orchestrator | 2026-03-02 01:04:29.983460 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-02 01:04:29.983463 | orchestrator | Monday 02 March 2026 01:03:18 +0000 (0:00:00.067) 0:02:53.825 ********** 2026-03-02 01:04:29.983467 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:04:29.983471 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:04:29.983475 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:04:29.983479 | orchestrator | 2026-03-02 01:04:29.983482 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-02 01:04:29.983486 | orchestrator | Monday 02 March 2026 01:03:42 +0000 (0:00:24.082) 0:03:17.908 ********** 2026-03-02 01:04:29.983490 | orchestrator | changed: [testbed-node-3] 2026-03-02 01:04:29.983494 | orchestrator | changed: [testbed-node-4] 2026-03-02 01:04:29.983498 | orchestrator | changed: [testbed-node-5] 2026-03-02 01:04:29.983501 | orchestrator | 2026-03-02 01:04:29.983505 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 01:04:29.983509 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-02 01:04:29.983514 | orchestrator | 
testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-02 01:04:29.983518 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-02 01:04:29.983521 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-02 01:04:29.983525 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-02 01:04:29.983529 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-02 01:04:29.983533 | orchestrator | 2026-03-02 01:04:29.983536 | orchestrator | 2026-03-02 01:04:29.983540 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 01:04:29.983544 | orchestrator | Monday 02 March 2026 01:04:27 +0000 (0:00:44.249) 0:04:02.158 ********** 2026-03-02 01:04:29.983551 | orchestrator | =============================================================================== 2026-03-02 01:04:29.983555 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 44.25s 2026-03-02 01:04:29.983558 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.46s 2026-03-02 01:04:29.983562 | orchestrator | neutron : Restart neutron-server container ----------------------------- 24.08s 2026-03-02 01:04:29.983566 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.09s 2026-03-02 01:04:29.983570 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.28s 2026-03-02 01:04:29.983574 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.20s 2026-03-02 01:04:29.983577 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.41s 2026-03-02 01:04:29.983581 | orchestrator | 
service-ks-register : neutron | Creating users -------------------------- 4.29s 2026-03-02 01:04:29.983585 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.89s 2026-03-02 01:04:29.983589 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.83s 2026-03-02 01:04:29.983592 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.80s 2026-03-02 01:04:29.983596 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.77s 2026-03-02 01:04:29.983600 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 3.75s 2026-03-02 01:04:29.983603 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.52s 2026-03-02 01:04:29.983627 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.49s 2026-03-02 01:04:29.983631 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.44s 2026-03-02 01:04:29.983635 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.42s 2026-03-02 01:04:29.983638 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.25s 2026-03-02 01:04:29.983642 | orchestrator | neutron : Creating Neutron database ------------------------------------- 2.92s 2026-03-02 01:04:29.983646 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 2.88s 2026-03-02 01:04:29.983650 | orchestrator | 2026-03-02 01:04:29 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:04:29.983654 | orchestrator | 2026-03-02 01:04:29 | INFO  | Task a27709c6-809d-4057-8ad5-8d018883a99d is in state STARTED 2026-03-02 01:04:29.986131 | orchestrator | 2026-03-02 01:04:29 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 
01:04:29.986513 | orchestrator | 2026-03-02 01:04:29 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:04:29.986741 | orchestrator | 2026-03-02 01:04:29 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:05:00.404235 | orchestrator | 2026-03-02 01:05:00 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:05:00.404553 | orchestrator | 2026-03-02 01:05:00 | INFO  | Task a27709c6-809d-4057-8ad5-8d018883a99d is in state SUCCESS 2026-03-02 01:05:00.405255 | orchestrator | 2026-03-02 01:05:00 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:05:00.405792 | orchestrator | 2026-03-02 01:05:00 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:05:00.407812 | orchestrator | 2026-03-02 01:05:00 | INFO  | Task 0d64f8b9-ff78-434e-92d4-1e3077fb9c31 is in state STARTED 2026-03-02 01:05:00.407852 | orchestrator | 2026-03-02 01:05:00 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:05:03.441388 | orchestrator | 2026-03-02 01:05:03 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:05:03.442708 | orchestrator | 2026-03-02 01:05:03 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state STARTED 2026-03-02 01:05:03.444400 | orchestrator | 2026-03-02 01:05:03 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:05:03.445931 | orchestrator | 2026-03-02 01:05:03 | INFO  | Task 0d64f8b9-ff78-434e-92d4-1e3077fb9c31 is in state STARTED 2026-03-02 01:05:03.446000 | orchestrator | 2026-03-02 01:05:03 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:05:06.494120 | orchestrator | 2026-03-02 01:05:06.494175 | orchestrator | 2026-03-02 01:05:06.494183 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-02 01:05:06.494189 | orchestrator | 2026-03-02 01:05:06.494196 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-02 01:05:06.494202 | orchestrator | Monday 02 March 2026 01:04:30 +0000 (0:00:00.205) 0:00:00.205 ********** 2026-03-02 01:05:06.494209 | orchestrator | ok: [testbed-manager] 2026-03-02 01:05:06.494215 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:05:06.494222 | orchestrator | ok: [testbed-node-1] 2026-03-02 01:05:06.494228 | orchestrator | ok: [testbed-node-2] 2026-03-02 01:05:06.494234 | orchestrator | ok: [testbed-node-3] 2026-03-02 01:05:06.494240 | orchestrator | ok: [testbed-node-4] 2026-03-02 01:05:06.494246 | orchestrator | ok: [testbed-node-5] 2026-03-02 01:05:06.494252 | orchestrator | 2026-03-02 01:05:06.494278 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-02 01:05:06.494286 | orchestrator | Monday 02 March 2026 01:04:31 +0000 (0:00:00.654) 0:00:00.860 ********** 2026-03-02 01:05:06.494292 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-03-02 01:05:06.494299 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-03-02 01:05:06.494305 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-03-02 01:05:06.494311 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-03-02 01:05:06.494318 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-03-02 01:05:06.494324 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-03-02 01:05:06.494331 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-03-02 01:05:06.494337 | orchestrator | 2026-03-02 01:05:06.494343 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-02 01:05:06.494349 | orchestrator | 2026-03-02 01:05:06.494356 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-03-02 01:05:06.494362 | orchestrator | Monday 02 March 2026 01:04:32 +0000 (0:00:00.535) 0:00:01.395 ********** 2026-03-02 
01:05:06.494369 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 01:05:06.494376 | orchestrator | 2026-03-02 01:05:06.494383 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-03-02 01:05:06.494389 | orchestrator | Monday 02 March 2026 01:04:33 +0000 (0:00:01.211) 0:00:02.607 ********** 2026-03-02 01:05:06.494395 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-03-02 01:05:06.494419 | orchestrator | 2026-03-02 01:05:06.494425 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-03-02 01:05:06.494455 | orchestrator | Monday 02 March 2026 01:04:36 +0000 (0:00:02.962) 0:00:05.569 ********** 2026-03-02 01:05:06.494462 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-03-02 01:05:06.494586 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-03-02 01:05:06.494593 | orchestrator | 2026-03-02 01:05:06.494599 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-03-02 01:05:06.494606 | orchestrator | Monday 02 March 2026 01:04:42 +0000 (0:00:05.979) 0:00:11.549 ********** 2026-03-02 01:05:06.494612 | orchestrator | ok: [testbed-manager] => (item=service) 2026-03-02 01:05:06.494618 | orchestrator | 2026-03-02 01:05:06.494624 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-03-02 01:05:06.494630 | orchestrator | Monday 02 March 2026 01:04:45 +0000 (0:00:02.863) 0:00:14.413 ********** 2026-03-02 01:05:06.494636 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-03-02 01:05:06.494642 | 
orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-02 01:05:06.494648 | orchestrator | 2026-03-02 01:05:06.494655 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-03-02 01:05:06.494661 | orchestrator | Monday 02 March 2026 01:04:48 +0000 (0:00:03.388) 0:00:17.802 ********** 2026-03-02 01:05:06.494667 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-03-02 01:05:06.494673 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-03-02 01:05:06.494679 | orchestrator | 2026-03-02 01:05:06.494685 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-03-02 01:05:06.494691 | orchestrator | Monday 02 March 2026 01:04:53 +0000 (0:00:05.187) 0:00:22.989 ********** 2026-03-02 01:05:06.494697 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-03-02 01:05:06.494703 | orchestrator | 2026-03-02 01:05:06.494709 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 01:05:06.494716 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 01:05:06.494722 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 01:05:06.494787 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 01:05:06.494794 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 01:05:06.494800 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 01:05:06.494817 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 01:05:06.494824 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2026-03-02 01:05:06.494830 | orchestrator | 2026-03-02 01:05:06.494836 | orchestrator | 2026-03-02 01:05:06.494842 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 01:05:06.494848 | orchestrator | Monday 02 March 2026 01:04:58 +0000 (0:00:04.407) 0:00:27.397 ********** 2026-03-02 01:05:06.494855 | orchestrator | =============================================================================== 2026-03-02 01:05:06.494861 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.98s 2026-03-02 01:05:06.494867 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.19s 2026-03-02 01:05:06.494873 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.41s 2026-03-02 01:05:06.494887 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.39s 2026-03-02 01:05:06.494893 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 2.96s 2026-03-02 01:05:06.494900 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.86s 2026-03-02 01:05:06.494906 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.21s 2026-03-02 01:05:06.494912 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.65s 2026-03-02 01:05:06.494919 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.54s 2026-03-02 01:05:06.494925 | orchestrator | 2026-03-02 01:05:06.494932 | orchestrator | 2026-03-02 01:05:06.494939 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-02 01:05:06.494944 | orchestrator | 2026-03-02 01:05:06.494948 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-02 01:05:06.494952 
| orchestrator | Monday 02 March 2026 01:03:20 +0000 (0:00:00.271) 0:00:00.271 ********** 2026-03-02 01:05:06.494955 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:05:06.494959 | orchestrator | ok: [testbed-node-1] 2026-03-02 01:05:06.494963 | orchestrator | ok: [testbed-node-2] 2026-03-02 01:05:06.494967 | orchestrator | 2026-03-02 01:05:06.494970 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-02 01:05:06.494974 | orchestrator | Monday 02 March 2026 01:03:20 +0000 (0:00:00.352) 0:00:00.623 ********** 2026-03-02 01:05:06.494978 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-02 01:05:06.494982 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-02 01:05:06.494986 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-03-02 01:05:06.494989 | orchestrator | 2026-03-02 01:05:06.494993 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-03-02 01:05:06.494997 | orchestrator | 2026-03-02 01:05:06.495000 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-02 01:05:06.495004 | orchestrator | Monday 02 March 2026 01:03:21 +0000 (0:00:00.448) 0:00:01.072 ********** 2026-03-02 01:05:06.495008 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:05:06.495012 | orchestrator | 2026-03-02 01:05:06.495015 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-03-02 01:05:06.495019 | orchestrator | Monday 02 March 2026 01:03:21 +0000 (0:00:00.513) 0:00:01.585 ********** 2026-03-02 01:05:06.495023 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-03-02 01:05:06.495027 | orchestrator | 2026-03-02 01:05:06.495030 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] 
*********************** 2026-03-02 01:05:06.495034 | orchestrator | Monday 02 March 2026 01:03:25 +0000 (0:00:04.120) 0:00:05.706 ********** 2026-03-02 01:05:06.495038 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-02 01:05:06.495042 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-02 01:05:06.495045 | orchestrator | 2026-03-02 01:05:06.495049 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-03-02 01:05:06.495053 | orchestrator | Monday 02 March 2026 01:03:32 +0000 (0:00:06.469) 0:00:12.176 ********** 2026-03-02 01:05:06.495057 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-02 01:05:06.495060 | orchestrator | 2026-03-02 01:05:06.495064 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-03-02 01:05:06.495068 | orchestrator | Monday 02 March 2026 01:03:35 +0000 (0:00:02.883) 0:00:15.060 ********** 2026-03-02 01:05:06.495072 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-03-02 01:05:06.495075 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-02 01:05:06.495079 | orchestrator | 2026-03-02 01:05:06.495083 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-03-02 01:05:06.495090 | orchestrator | Monday 02 March 2026 01:03:38 +0000 (0:00:03.244) 0:00:18.304 ********** 2026-03-02 01:05:06.495093 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-02 01:05:06.495097 | orchestrator | 2026-03-02 01:05:06.495101 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-03-02 01:05:06.495105 | orchestrator | Monday 02 March 2026 01:03:41 +0000 (0:00:03.055) 0:00:21.359 ********** 2026-03-02 01:05:06.495108 | orchestrator | changed: [testbed-node-0] 
=> (item=magnum -> service -> admin) 2026-03-02 01:05:06.495112 | orchestrator | 2026-03-02 01:05:06.495116 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-02 01:05:06.495119 | orchestrator | Monday 02 March 2026 01:03:44 +0000 (0:00:03.632) 0:00:24.992 ********** 2026-03-02 01:05:06.495123 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:05:06.495127 | orchestrator | 2026-03-02 01:05:06.495131 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-03-02 01:05:06.495142 | orchestrator | Monday 02 March 2026 01:03:48 +0000 (0:00:03.402) 0:00:28.394 ********** 2026-03-02 01:05:06.495149 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:05:06.495155 | orchestrator | 2026-03-02 01:05:06.495161 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-03-02 01:05:06.495167 | orchestrator | Monday 02 March 2026 01:03:52 +0000 (0:00:03.804) 0:00:32.198 ********** 2026-03-02 01:05:06.495173 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:05:06.495179 | orchestrator | 2026-03-02 01:05:06.495185 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-02 01:05:06.495192 | orchestrator | Monday 02 March 2026 01:03:55 +0000 (0:00:02.988) 0:00:35.187 ********** 2026-03-02 01:05:06.495200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-02 01:05:06.495209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-02 01:05:06.495215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-02 01:05:06.495227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:05:06.495239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:05:06.495246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-02 01:05:06.495253 | orchestrator |
2026-03-02 01:05:06.495260 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-03-02 01:05:06.495266 | orchestrator | Monday 02 March 2026 01:03:56 +0000 (0:00:01.414) 0:00:36.602 **********
2026-03-02 01:05:06.495272 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:05:06.495279 | orchestrator |
2026-03-02 01:05:06.495285 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-03-02 01:05:06.495291 | orchestrator | Monday 02 March 2026 01:03:56 +0000 (0:00:00.220) 0:00:36.823 **********
2026-03-02 01:05:06.495297 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:05:06.495303 | orchestrator | skipping: [testbed-node-1]
2026-03-02 01:05:06.495309 | orchestrator | skipping: [testbed-node-2]
2026-03-02 01:05:06.495316 | orchestrator |
2026-03-02 01:05:06.495322 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-03-02 01:05:06.495328 | orchestrator | Monday 02 March 2026 01:03:57 +0000 (0:00:00.735) 0:00:37.558 **********
2026-03-02 01:05:06.495334 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-02 01:05:06.495340 | orchestrator |
2026-03-02 01:05:06.495346 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-03-02 01:05:06.495353 | orchestrator | Monday 02 March 2026 01:03:58 +0000 (0:00:00.927) 0:00:38.485 
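Each service item in the log above carries a kolla-style healthcheck dict (`interval`, `retries`, `start_period`, `test`, `timeout`, with `test` as a `['CMD-SHELL', '<command>']` list). As a rough illustration of how such a dict maps onto container-engine health options, here is a minimal sketch; the `healthcheck_flags` helper and the docker-run-style flag names are assumptions for illustration, not kolla-ansible's actual implementation:

```python
def healthcheck_flags(hc):
    """Translate a kolla-style healthcheck dict into docker-run style
    --health-* flags. Illustrative sketch only, not kolla-ansible code."""
    # 'test' is a list like ['CMD-SHELL', '<shell command>']
    if hc["test"][0] == "CMD-SHELL":
        cmd = " ".join(hc["test"][1:])
    else:
        cmd = " ".join(hc["test"])
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# Healthcheck dict exactly as logged for magnum-conductor above
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_port magnum-conductor 5672"],
      "timeout": "30"}
print(healthcheck_flags(hc))
```

The same shape recurs for magnum-api, where `test` runs `healthcheck_curl` against the node's API bind address instead of a port probe.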
********** 2026-03-02 01:05:06.495364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-02 01:05:06.495371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-02 01:05:06.495382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-02 01:05:06.495389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:05:06.495395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:05:06.495406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:05:06.495412 | orchestrator | 2026-03-02 01:05:06.495418 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-02 01:05:06.495424 | orchestrator | Monday 02 March 2026 01:04:00 +0000 (0:00:01.966) 0:00:40.452 ********** 2026-03-02 01:05:06.495431 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:05:06.495437 | orchestrator | ok: [testbed-node-1] 2026-03-02 01:05:06.495443 | orchestrator | ok: [testbed-node-2] 2026-03-02 01:05:06.495449 | orchestrator | 2026-03-02 01:05:06.495456 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-02 01:05:06.495462 | orchestrator | Monday 02 March 2026 01:04:00 +0000 (0:00:00.295) 0:00:40.747 ********** 2026-03-02 01:05:06.495468 | orchestrator | included: 
/ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:05:06.495474 | orchestrator | 2026-03-02 01:05:06.495480 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-02 01:05:06.495486 | orchestrator | Monday 02 March 2026 01:04:01 +0000 (0:00:00.968) 0:00:41.716 ********** 2026-03-02 01:05:06.495496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-02 01:05:06.495504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-02 01:05:06.495511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-02 01:05:06.495521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:05:06.495528 | orchestrator | 
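The `haproxy` sub-dict in the magnum-api items above declares two listeners: an internal one (`external: False`) and an external one bound to `external_fqdn` `api.testbed.osism.xyz`, both on port 9511. A minimal sketch of how such a dict could be turned into (frontend, bind host, port) tuples follows; the `haproxy_frontends` helper is hypothetical, and the internal VIP 192.168.16.9 is inferred from the `no_proxy` lists in the log, while the real haproxy configuration is templated by kolla-ansible:

```python
def haproxy_frontends(haproxy_conf, internal_vip):
    """Derive (frontend_name, bind_host, port) tuples from a kolla-style
    'haproxy' sub-dict as seen in the logged items. Sketch only."""
    frontends = []
    for name, cfg in haproxy_conf.items():
        if cfg.get("enabled") != "yes":
            continue  # only enabled listeners get a frontend
        # external listeners bind the public FQDN, internal ones the VIP
        host = cfg["external_fqdn"] if cfg.get("external") else internal_vip
        frontends.append((name, host, int(cfg["listen_port"])))
    return frontends

# haproxy sub-dict exactly as logged for magnum-api above
hap = {
    "magnum_api": {"enabled": "yes", "mode": "http", "external": False,
                   "port": "9511", "listen_port": "9511"},
    "magnum_api_external": {"enabled": "yes", "mode": "http", "external": True,
                            "external_fqdn": "api.testbed.osism.xyz",
                            "port": "9511", "listen_port": "9511"},
}
print(haproxy_frontends(hap, "192.168.16.9"))
```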
changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:05:06.495540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:05:06.495546 | orchestrator | 2026-03-02 01:05:06.495552 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-02 01:05:06.495559 | orchestrator | Monday 02 March 2026 01:04:04 +0000 (0:00:02.307) 0:00:44.024 ********** 2026-03-02 01:05:06.495565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-02 01:05:06.495575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-02 01:05:06.495582 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:05:06.495589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-02 01:05:06.495595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-02 01:05:06.495602 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:05:06.495613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-02 01:05:06.495619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-02 01:05:06.495629 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:05:06.495636 | orchestrator | 2026-03-02 01:05:06.495642 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-02 01:05:06.495648 | orchestrator | Monday 02 March 2026 01:04:04 +0000 (0:00:00.779) 0:00:44.804 ********** 2026-03-02 01:05:06.495654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-02 01:05:06.495661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-02 01:05:06.495668 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:05:06.495679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-02 01:05:06 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED
2026-03-02 01:05:06.495687 | orchestrator | 2026-03-02 01:05:06 | INFO  | Task bc23a46d-9f45-4fa4-90d9-a24759cb7fb9 is in state STARTED
2026-03-02 01:05:06.495693 | orchestrator | 2026-03-02 01:05:06 | INFO  | Task 2c62a2b2-b832-4eb6-98ee-ffca433a2006 is in state SUCCESS
2026-03-02 01:05:06.495699 | orchestrator | 2026-03-02 01:05:06 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED
2026-03-02 01:05:06.495706 | orchestrator | 2026-03-02 01:05:06 | INFO  | Task 0d64f8b9-ff78-434e-92d4-1e3077fb9c31 is in state STARTED
2026-03-02 01:05:06.495712 | orchestrator | 2026-03-02 01:05:06 | INFO  | Wait 1 second(s) until the next check
2026-03-02 01:05:06.495740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-02 01:05:06.495746 | orchestrator | skipping: [testbed-node-1]
2026-03-02 01:05:06.495753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-02 01:05:06.495759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-02 01:05:06.495765 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:05:06.495772 | orchestrator | 2026-03-02 01:05:06.495778 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-02 01:05:06.495784 | orchestrator | Monday 02 March 2026 01:04:06 +0000 (0:00:01.441) 0:00:46.246 ********** 2026-03-02 01:05:06.495794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-02 01:05:06.495801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-02 01:05:06.495811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-02 01:05:06.495818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:05:06.495825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:05:06.495834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:05:06.495840 | orchestrator | 2026-03-02 01:05:06.495846 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-02 01:05:06.495853 | orchestrator | Monday 02 March 2026 01:04:08 +0000 (0:00:02.588) 0:00:48.834 ********** 2026-03-02 01:05:06.495862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-02 01:05:06.495870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-02 01:05:06.495876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-02 01:05:06.495883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:05:06.495894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:05:06.495905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:05:06.495911 | orchestrator | 2026-03-02 01:05:06.495918 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-02 01:05:06.495924 | orchestrator | Monday 02 March 2026 01:04:14 +0000 (0:00:05.436) 0:00:54.270 ********** 2026-03-02 01:05:06.495930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-02 01:05:06.495937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-02 01:05:06.495944 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:05:06.495950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-02 01:05:06.495960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-02 01:05:06.495974 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:05:06.495981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-02 01:05:06.495988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-02 01:05:06.495994 | orchestrator | skipping: 
[testbed-node-2] 2026-03-02 01:05:06.496000 | orchestrator | 2026-03-02 01:05:06.496007 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-02 01:05:06.496013 | orchestrator | Monday 02 March 2026 01:04:14 +0000 (0:00:00.601) 0:00:54.871 ********** 2026-03-02 01:05:06.496019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-02 01:05:06.496026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-02 01:05:06.496042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-02 01:05:06.496048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:05:06.496055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:05:06.496061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:05:06.496068 | orchestrator | 2026-03-02 01:05:06.496074 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-02 01:05:06.496080 | orchestrator | Monday 02 March 2026 01:04:16 +0000 (0:00:01.997) 0:00:56.868 ********** 2026-03-02 01:05:06.496087 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:05:06.496093 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:05:06.496103 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:05:06.496109 | orchestrator | 2026-03-02 01:05:06.496115 | orchestrator 
| TASK [magnum : Creating Magnum database] *************************************** 2026-03-02 01:05:06.496121 | orchestrator | Monday 02 March 2026 01:04:17 +0000 (0:00:00.268) 0:00:57.137 ********** 2026-03-02 01:05:06.496128 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:05:06.496143 | orchestrator | 2026-03-02 01:05:06.496149 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-02 01:05:06.496161 | orchestrator | Monday 02 March 2026 01:04:19 +0000 (0:00:01.905) 0:00:59.042 ********** 2026-03-02 01:05:06.496167 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:05:06.496174 | orchestrator | 2026-03-02 01:05:06.496181 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-02 01:05:06.496187 | orchestrator | Monday 02 March 2026 01:04:21 +0000 (0:00:02.123) 0:01:01.166 ********** 2026-03-02 01:05:06.496194 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:05:06.496201 | orchestrator | 2026-03-02 01:05:06.496207 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-02 01:05:06.496214 | orchestrator | Monday 02 March 2026 01:04:37 +0000 (0:00:16.116) 0:01:17.283 ********** 2026-03-02 01:05:06.496221 | orchestrator | 2026-03-02 01:05:06.496232 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-02 01:05:06.496240 | orchestrator | Monday 02 March 2026 01:04:37 +0000 (0:00:00.058) 0:01:17.342 ********** 2026-03-02 01:05:06.496246 | orchestrator | 2026-03-02 01:05:06.496254 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-02 01:05:06.496259 | orchestrator | Monday 02 March 2026 01:04:37 +0000 (0:00:00.056) 0:01:17.398 ********** 2026-03-02 01:05:06.496264 | orchestrator | 2026-03-02 01:05:06.496269 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] 
************************ 2026-03-02 01:05:06.496273 | orchestrator | Monday 02 March 2026 01:04:37 +0000 (0:00:00.060) 0:01:17.459 ********** 2026-03-02 01:05:06.496278 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:05:06.496283 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:05:06.496287 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:05:06.496292 | orchestrator | 2026-03-02 01:05:06.496296 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-02 01:05:06.496301 | orchestrator | Monday 02 March 2026 01:04:54 +0000 (0:00:16.750) 0:01:34.209 ********** 2026-03-02 01:05:06.496305 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:05:06.496310 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:05:06.496314 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:05:06.496319 | orchestrator | 2026-03-02 01:05:06.496323 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 01:05:06.496328 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-02 01:05:06.496333 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-02 01:05:06.496338 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-02 01:05:06.496342 | orchestrator | 2026-03-02 01:05:06.496347 | orchestrator | 2026-03-02 01:05:06.496351 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 01:05:06.496356 | orchestrator | Monday 02 March 2026 01:05:03 +0000 (0:00:09.691) 0:01:43.901 ********** 2026-03-02 01:05:06.496360 | orchestrator | =============================================================================== 2026-03-02 01:05:06.496365 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 16.75s 
2026-03-02 01:05:06.496369 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.12s 2026-03-02 01:05:06.496374 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 9.69s 2026-03-02 01:05:06.496378 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.47s 2026-03-02 01:05:06.496386 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.44s 2026-03-02 01:05:06.496391 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.12s 2026-03-02 01:05:06.496395 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.80s 2026-03-02 01:05:06.496399 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.63s 2026-03-02 01:05:06.496404 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.40s 2026-03-02 01:05:06.496409 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.24s 2026-03-02 01:05:06.496413 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.06s 2026-03-02 01:05:06.496418 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 2.99s 2026-03-02 01:05:06.496422 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 2.88s 2026-03-02 01:05:06.496427 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.59s 2026-03-02 01:05:06.496431 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.31s 2026-03-02 01:05:06.496436 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.12s 2026-03-02 01:05:06.496440 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.00s 2026-03-02 
01:05:06.496445 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 1.97s 2026-03-02 01:05:06.496449 | orchestrator | magnum : Creating Magnum database --------------------------------------- 1.91s 2026-03-02 01:05:06.496454 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 1.44s 2026-03-02 01:05:09.525147 | orchestrator | 2026-03-02 01:05:09 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:05:09.525222 | orchestrator | 2026-03-02 01:05:09 | INFO  | Task bc23a46d-9f45-4fa4-90d9-a24759cb7fb9 is in state STARTED 2026-03-02 01:05:09.525653 | orchestrator | 2026-03-02 01:05:09 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:05:09.526432 | orchestrator | 2026-03-02 01:05:09 | INFO  | Task 0d64f8b9-ff78-434e-92d4-1e3077fb9c31 is in state STARTED 2026-03-02 01:05:09.526459 | orchestrator | 2026-03-02 01:05:09 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:05:12.572299 | orchestrator | 2026-03-02 01:05:12 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:05:12.573904 | orchestrator | 2026-03-02 01:05:12 | INFO  | Task bc23a46d-9f45-4fa4-90d9-a24759cb7fb9 is in state STARTED 2026-03-02 01:05:12.575570 | orchestrator | 2026-03-02 01:05:12 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:05:12.578168 | orchestrator | 2026-03-02 01:05:12 | INFO  | Task 0d64f8b9-ff78-434e-92d4-1e3077fb9c31 is in state STARTED 2026-03-02 01:05:12.578271 | orchestrator | 2026-03-02 01:05:12 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:05:15.618063 | orchestrator | 2026-03-02 01:05:15 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:05:15.620016 | orchestrator | 2026-03-02 01:05:15 | INFO  | Task bc23a46d-9f45-4fa4-90d9-a24759cb7fb9 is in state STARTED 2026-03-02 01:05:15.620380 | orchestrator | 2026-03-02 
01:05:15 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:05:15.621488 | orchestrator | 2026-03-02 01:05:15 | INFO  | Task 0d64f8b9-ff78-434e-92d4-1e3077fb9c31 is in state STARTED 2026-03-02 01:05:15.621517 | orchestrator | 2026-03-02 01:05:15 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:05:18.664149 | orchestrator | 2026-03-02 01:05:18 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:05:18.664334 | orchestrator | 2026-03-02 01:05:18 | INFO  | Task bc23a46d-9f45-4fa4-90d9-a24759cb7fb9 is in state STARTED 2026-03-02 01:05:18.665037 | orchestrator | 2026-03-02 01:05:18 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:05:18.666440 | orchestrator | 2026-03-02 01:05:18 | INFO  | Task 0d64f8b9-ff78-434e-92d4-1e3077fb9c31 is in state STARTED 2026-03-02 01:05:18.666480 | orchestrator | 2026-03-02 01:05:18 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:05:21.701581 | orchestrator | 2026-03-02 01:05:21 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:05:21.704489 | orchestrator | 2026-03-02 01:05:21 | INFO  | Task bc23a46d-9f45-4fa4-90d9-a24759cb7fb9 is in state STARTED 2026-03-02 01:05:21.708430 | orchestrator | 2026-03-02 01:05:21 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:05:21.711234 | orchestrator | 2026-03-02 01:05:21 | INFO  | Task 0d64f8b9-ff78-434e-92d4-1e3077fb9c31 is in state STARTED 2026-03-02 01:05:21.711279 | orchestrator | 2026-03-02 01:05:21 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:05:24.758712 | orchestrator | 2026-03-02 01:05:24 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:05:24.760755 | orchestrator | 2026-03-02 01:05:24 | INFO  | Task bc23a46d-9f45-4fa4-90d9-a24759cb7fb9 is in state STARTED 2026-03-02 01:05:24.763232 | orchestrator | 2026-03-02 01:05:24 | INFO  | Task 
0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:05:24.766262 | orchestrator | 2026-03-02 01:05:24 | INFO  | Task 0d64f8b9-ff78-434e-92d4-1e3077fb9c31 is in state STARTED 2026-03-02 01:05:24.766344 | orchestrator | 2026-03-02 01:05:24 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:05:27.810922 | orchestrator | 2026-03-02 01:05:27 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:05:27.810981 | orchestrator | 2026-03-02 01:05:27 | INFO  | Task bc23a46d-9f45-4fa4-90d9-a24759cb7fb9 is in state STARTED 2026-03-02 01:05:27.812431 | orchestrator | 2026-03-02 01:05:27 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:05:27.814007 | orchestrator | 2026-03-02 01:05:27 | INFO  | Task 0d64f8b9-ff78-434e-92d4-1e3077fb9c31 is in state STARTED 2026-03-02 01:05:27.814078 | orchestrator | 2026-03-02 01:05:27 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:05:30.867247 | orchestrator | 2026-03-02 01:05:30 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:05:30.867455 | orchestrator | 2026-03-02 01:05:30 | INFO  | Task bc23a46d-9f45-4fa4-90d9-a24759cb7fb9 is in state STARTED 2026-03-02 01:05:30.868561 | orchestrator | 2026-03-02 01:05:30 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED 2026-03-02 01:05:30.868913 | orchestrator | 2026-03-02 01:05:30 | INFO  | Task 0d64f8b9-ff78-434e-92d4-1e3077fb9c31 is in state STARTED 2026-03-02 01:05:30.869090 | orchestrator | 2026-03-02 01:05:30 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:05:33.912519 | orchestrator | 2026-03-02 01:05:33 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:05:33.912620 | orchestrator | 2026-03-02 01:05:33 | INFO  | Task bc23a46d-9f45-4fa4-90d9-a24759cb7fb9 is in state STARTED 2026-03-02 01:05:33.913335 | orchestrator | 2026-03-02 01:05:33 | INFO  | Task 
0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state STARTED
2026-03-02 01:05:33.914132 | orchestrator | 2026-03-02 01:05:33 | INFO  | Task 0d64f8b9-ff78-434e-92d4-1e3077fb9c31 is in state STARTED
2026-03-02 01:05:33.914205 | orchestrator | 2026-03-02 01:05:33 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycles trimmed: repeated roughly every 3 seconds from 01:05:36 through 01:06:40, tasks c0655a63-5aa9-412f-bc02-e040a33e611a, bc23a46d-9f45-4fa4-90d9-a24759cb7fb9, 0f7c86c5-9a58-4707-813f-50f9b83f2233 and 0d64f8b9-ff78-434e-92d4-1e3077fb9c31 remained in state STARTED ...]
2026-03-02 01:06:43.889804 | orchestrator | 2026-03-02 01:06:43 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED
2026-03-02 01:06:43.891458 | orchestrator | 2026-03-02 01:06:43 | INFO  | Task beb6fba3-4a1f-4a81-be47-fa0f340ac728 is in state STARTED
2026-03-02 01:06:43.894510 | orchestrator | 2026-03-02 01:06:43 | INFO  | Task bc23a46d-9f45-4fa4-90d9-a24759cb7fb9 is in state STARTED
2026-03-02 01:06:43.900062 | orchestrator | 2026-03-02 01:06:43 | INFO  | Task 0f7c86c5-9a58-4707-813f-50f9b83f2233 is in state SUCCESS
2026-03-02 01:06:43.903383 | orchestrator |
2026-03-02 01:06:43.903444 | orchestrator |
2026-03-02 01:06:43.903453 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-02 01:06:43.903460 | orchestrator |
2026-03-02 01:06:43.903467 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-02 01:06:43.903474 | orchestrator | Monday 02 March 2026 01:03:53 +0000 (0:00:00.241) 0:00:00.241 
**********
2026-03-02 01:06:43.903481 | orchestrator | ok: [testbed-manager]
2026-03-02 01:06:43.903488 | orchestrator | ok: [testbed-node-0]
2026-03-02 01:06:43.903494 | orchestrator | ok: [testbed-node-1]
2026-03-02 01:06:43.903500 | orchestrator | ok: [testbed-node-2]
2026-03-02 01:06:43.903506 | orchestrator | ok: [testbed-node-3]
2026-03-02 01:06:43.903513 | orchestrator | ok: [testbed-node-4]
2026-03-02 01:06:43.903531 | orchestrator | ok: [testbed-node-5]
2026-03-02 01:06:43.903538 | orchestrator |
2026-03-02 01:06:43.903641 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-02 01:06:43.903682 | orchestrator | Monday 02 March 2026 01:03:54 +0000 (0:00:00.762) 0:00:01.004 **********
2026-03-02 01:06:43.903691 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-03-02 01:06:43.903698 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-03-02 01:06:43.903704 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-03-02 01:06:43.903711 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-03-02 01:06:43.903718 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-03-02 01:06:43.903724 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-03-02 01:06:43.903731 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-03-02 01:06:43.903737 | orchestrator |
2026-03-02 01:06:43.903744 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-03-02 01:06:43.903751 | orchestrator |
2026-03-02 01:06:43.903758 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-03-02 01:06:43.903793 | orchestrator | Monday 02 March 2026 01:03:54 +0000 (0:00:00.666) 0:00:01.671 **********
2026-03-02 01:06:43.903799 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 01:06:43.903807 | orchestrator | 2026-03-02 01:06:43.903813 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-02 01:06:43.903819 | orchestrator | Monday 02 March 2026 01:03:56 +0000 (0:00:01.440) 0:00:03.111 ********** 2026-03-02 01:06:43.904017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.904029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.904036 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-02 01:06:43.904059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.904077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.904118 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.904125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.904132 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.904138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.904145 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.904158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.904165 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.904179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.904187 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.904194 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.904201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.904207 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2026-03-02 01:06:43.904220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.904227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.904234 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.904244 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.904252 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-02 01:06:43.904349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.904368 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.904384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.904500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.904818 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-02 01:06:43.904873 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 01:06:43.904882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-02 01:06:43.904889 | orchestrator |
2026-03-02 01:06:43.904896 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-03-02 01:06:43.904903 | orchestrator | Monday 02 March 2026 01:03:58 +0000 (0:00:02.733) 0:00:05.845 **********
2026-03-02 01:06:43.904910 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-02 01:06:43.904917 | orchestrator |
2026-03-02 01:06:43.904924 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-03-02 01:06:43.904931 | orchestrator | Monday 02 March 2026 01:04:00 +0000 (0:00:01.411) 0:00:07.256 **********
2026-03-02 01:06:43.904938 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-02 01:06:43.904953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.904967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.904975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.905001 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.905010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.905017 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 
01:06:43.905024 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.905037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.905044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.905055 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.905064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.905087 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.905095 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.905102 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.905108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.905120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.905127 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.905137 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.905144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.905167 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.905176 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-02 01:06:43.905183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.905196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.905202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.905211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.905218 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.905230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.905237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.905618 | orchestrator | 2026-03-02 01:06:43.905635 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-02 01:06:43.905642 | orchestrator | Monday 02 March 2026 01:04:05 +0000 (0:00:05.719) 0:00:12.976 ********** 2026-03-02 01:06:43.905650 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-02 01:06:43.905665 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-02 01:06:43.905672 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-02 01:06:43.905684 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-02 01:06:43.905718 | orchestrator | skipping: [testbed-manager] 
=> (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 01:06:43.905726 | orchestrator | skipping: [testbed-manager] 2026-03-02 01:06:43.905733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-02 01:06:43.905740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 01:06:43.905753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 01:06:43.905760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-02 01:06:43.905767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 01:06:43.905773 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:06:43.905787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-02 01:06:43.905794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 
'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 01:06:43.905820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 01:06:43.905828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-02 01:06:43.905840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 01:06:43.905847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-02 01:06:43.905854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 01:06:43.905861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 01:06:43.905871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-02 01:06:43.906115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 01:06:43.906125 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:06:43.906132 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:06:43.906209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-02 01:06:43.906230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-02 01:06:43.906237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-02 01:06:43.906244 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:06:43.906251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-02 01:06:43.906258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-02 01:06:43.906269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 
'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-02 01:06:43.906276 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:06:43.906284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-02 01:06:43.906291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-02 01:06:43.906319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-02 01:06:43.906333 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:06:43.906340 | orchestrator | 2026-03-02 01:06:43.906347 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-02 01:06:43.906355 | orchestrator | Monday 02 March 2026 01:04:07 +0000 (0:00:01.904) 0:00:14.881 ********** 2026-03-02 01:06:43.906362 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-02 01:06:43.906368 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-02 01:06:43.906375 | orchestrator | skipping: [testbed-manager] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-02 01:06:43.906385 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-02 01:06:43.906394 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 01:06:43.906425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-02 01:06:43.906434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 01:06:43.906440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 01:06:43.906447 | orchestrator | skipping: [testbed-manager] 2026-03-02 01:06:43.906453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-02 01:06:43.906460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 01:06:43.906467 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:06:43.906477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-02 01:06:43.906483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 01:06:43.906494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 01:06:43.906556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-02 01:06:43.906567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 01:06:43.906573 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:06:43.906580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-02 01:06:43.906587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 01:06:43.906594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 01:06:43.906605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  
2026-03-02 01:06:43.906612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-02 01:06:43.906623 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:06:43.906649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-02 01:06:43.906657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-02 01:06:43.906663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-02 01:06:43.906669 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:06:43.906675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-02 01:06:43.906682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-02 01:06:43.906689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}})  2026-03-02 01:06:43.906695 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:06:43.906706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-02 01:06:43.906718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-02 01:06:43.906744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-02 01:06:43.906751 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:06:43.906758 | orchestrator | 2026-03-02 01:06:43.906764 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 
2026-03-02 01:06:43.906771 | orchestrator | Monday 02 March 2026 01:04:10 +0000 (0:00:02.277) 0:00:17.158 ********** 2026-03-02 01:06:43.906778 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-02 01:06:43.906785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.906793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2026-03-02 01:06:43.906799 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.906818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.906826 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.906854 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.906862 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.906869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.906876 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.906883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.906890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.906908 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.906915 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.906942 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.906951 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.906959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.906966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.906974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.906982 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.906999 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-02 01:06:43.907025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.907034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.907041 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.907048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.907055 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.907066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.907077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.907084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.907091 | orchestrator | 2026-03-02 01:06:43.907098 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-02 01:06:43.907105 | orchestrator | Monday 02 March 2026 01:04:16 +0000 (0:00:05.932) 0:00:23.090 ********** 2026-03-02 01:06:43.907111 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-02 01:06:43.907118 | orchestrator | 2026-03-02 01:06:43.907125 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-02 01:06:43.907151 | orchestrator | Monday 02 March 2026 01:04:17 +0000 (0:00:01.107) 0:00:24.198 ********** 2026-03-02 01:06:43.907160 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1113184, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7822733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907168 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1113184, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7822733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907175 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1113297, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.8025944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907187 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1113184, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7822733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 
01:06:43.907197 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1113297, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.8025944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907204 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1113184, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7822733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907228 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1113184, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7822733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 01:06:43.907236 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1113172, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7817523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907243 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1113184, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7822733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907250 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1113297, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.8025944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907261 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 
'inode': 1113184, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7822733, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907271 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1113172, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7817523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907279 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1113297, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.8025944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907302 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1113297, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.8025944, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907310 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1113285, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7977939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907317 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1113172, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7817523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907324 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1113297, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.8025944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})  2026-03-02 01:06:43.907337 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1113285, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7977939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907347 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1113172, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7817523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907354 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1113167, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7801118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907377 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1113172, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7817523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907385 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1113167, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7801118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907393 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1113285, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7977939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907403 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1113285, 
'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7977939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907410 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1113172, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7817523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907420 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1113185, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7829947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907427 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1113285, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7977939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907434 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1113297, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.8025944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 01:06:43.907459 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1113285, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7977939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907468 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1113185, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7829947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 
01:06:43.907482 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1113167, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7801118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907489 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1113167, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7801118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907498 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1113279, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7973967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907505 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1113279, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7973967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907511 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1113185, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7829947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907574 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1113167, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7801118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907583 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1113167, 'dev': 114, 'nlink': 1, 
'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7801118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907595 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1113191, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7957869, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907602 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1113185, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7829947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907612 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1113185, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7829947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907697 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1113279, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7973967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907706 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1113191, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7957869, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907733 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1113279, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7973967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907741 | orchestrator | 
skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1113279, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7973967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907754 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1113172, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7817523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 01:06:43.907761 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1113185, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7829947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907767 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1113191, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7957869, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907777 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1113191, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7957869, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907784 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1113180, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7821286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907809 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1113191, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 
'mtime': 1772409747.0, 'ctime': 1772410690.7957869, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907820 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1113180, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7821286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907827 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1113180, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7821286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907833 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1113279, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7973967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907839 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1113180, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7821286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907847 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1113296, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7986553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907853 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1113296, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7986553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907874 | orchestrator 
| skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1113180, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7821286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907884 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1113296, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7986553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907891 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1113191, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7957869, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907898 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1113296, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7986553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907906 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1113162, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7795975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907915 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1113285, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7977939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 01:06:43.907921 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1113162, 'dev': 114, 'nlink': 1, 
'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7795975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907945 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1113180, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7821286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907956 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1113296, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7986553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907962 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1113162, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7795975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907968 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1113162, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7795975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907974 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1113332, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.8049738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.907982 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1113296, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7986553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 
01:06:43.907988 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1113162, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7795975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908015 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1113332, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.8049738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908022 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1113162, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7795975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908028 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1113332, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.8049738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908035 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1113332, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.8049738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908041 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1113332, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.8049738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908050 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1113291, 'dev': 
114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7986553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908057 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1113332, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.8049738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908085 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1113291, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7986553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908091 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1113291, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7986553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908094 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1113291, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7986553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908098 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1113291, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7986553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908102 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1113169, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7801118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})  2026-03-02 01:06:43.908111 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1113291, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7986553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908115 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1113164, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7795975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908125 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1113169, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7801118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908129 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1113169, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7801118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908133 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1113169, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7801118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908138 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1113169, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7801118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908144 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 
1113164, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7795975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908153 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1113275, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7962885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908164 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1113167, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7801118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 01:06:43.908178 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1113164, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7795975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908185 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1113169, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7801118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908191 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1113164, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7795975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908199 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1113164, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7795975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 
 2026-03-02 01:06:43.908203 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1113275, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7962885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908210 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1113272, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7962885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908217 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1113275, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7962885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908226 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1113275, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7962885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908230 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1113275, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7962885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908234 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1113272, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7962885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908238 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1113164, 'dev': 
114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7795975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908242 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1113326, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.8049738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908246 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:06:43.908252 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1113275, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7962885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908259 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1113326, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 
1772410690.8049738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908263 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:06:43.908271 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1113272, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7962885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908275 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1113272, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7962885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908279 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1113272, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7962885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908283 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1113272, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7962885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908287 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1113326, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.8049738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908297 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:06:43.908306 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1113185, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7829947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}) 2026-03-02 01:06:43.908314 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1113326, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.8049738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908320 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:06:43.908332 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1113326, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.8049738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-02 01:06:43.908340 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:06:43.908347 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1113326, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.8049738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2026-03-02 01:06:43.908354 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:06:43.908360 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1113279, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7973967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 01:06:43.908365 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1113191, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7957869, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 01:06:43.908369 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1113180, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7821286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 01:06:43.908382 | orchestrator | changed: 
[testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1113296, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7986553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 01:06:43.908387 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1113162, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7795975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 01:06:43.908398 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1113332, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.8049738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 01:06:43.908405 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1113291, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7986553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 01:06:43.908413 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1113169, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7801118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 01:06:43.908420 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1113164, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7795975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 01:06:43.908432 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1113275, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 
'mtime': 1772409747.0, 'ctime': 1772410690.7962885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 01:06:43.908442 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1113272, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7962885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 01:06:43.908447 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1113326, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.8049738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-02 01:06:43.908452 | orchestrator | 2026-03-02 01:06:43.908457 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-02 01:06:43.908461 | orchestrator | Monday 02 March 2026 01:04:40 +0000 (0:00:22.954) 0:00:47.153 ********** 2026-03-02 01:06:43.908468 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-02 01:06:43.908473 | orchestrator | 2026-03-02 01:06:43.908477 | orchestrator | TASK [prometheus : Find prometheus host config overrides] 
********************** 2026-03-02 01:06:43.908482 | orchestrator | Monday 02 March 2026 01:04:40 +0000 (0:00:00.810) 0:00:47.963 ********** 2026-03-02 01:06:43.908487 | orchestrator | [WARNING]: Skipped 2026-03-02 01:06:43.908491 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-02 01:06:43.908498 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-03-02 01:06:43.908504 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-02 01:06:43.908511 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-02 01:06:43.908534 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-02 01:06:43.908541 | orchestrator | [WARNING]: Skipped 2026-03-02 01:06:43.908549 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-02 01:06:43.908555 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-02 01:06:43.908562 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-02 01:06:43.908568 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-02 01:06:43.908574 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-02 01:06:43.908579 | orchestrator | [WARNING]: Skipped 2026-03-02 01:06:43.908583 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-02 01:06:43.908588 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-02 01:06:43.908593 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-02 01:06:43.908597 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-02 01:06:43.908602 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-02 01:06:43.908606 | orchestrator | [WARNING]: Skipped 2026-03-02 01:06:43.908615 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-02 01:06:43.908619 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-02 01:06:43.908624 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-02 01:06:43.908629 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-02 01:06:43.908633 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-02 01:06:43.908638 | orchestrator | [WARNING]: Skipped 2026-03-02 01:06:43.908642 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-02 01:06:43.908646 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-03-02 01:06:43.908651 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-02 01:06:43.908655 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-03-02 01:06:43.908659 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-02 01:06:43.908664 | orchestrator | [WARNING]: Skipped 2026-03-02 01:06:43.908668 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-02 01:06:43.908673 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-03-02 01:06:43.908678 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-02 01:06:43.908684 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-02 01:06:43.908691 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-02 01:06:43.908698 | orchestrator | [WARNING]: Skipped 2026-03-02 01:06:43.908704 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-02 01:06:43.908710 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-02 01:06:43.908716 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-02 
01:06:43.908722 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-03-02 01:06:43.908728 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-02 01:06:43.908735 | orchestrator | 2026-03-02 01:06:43.908748 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-02 01:06:43.908755 | orchestrator | Monday 02 March 2026 01:04:42 +0000 (0:00:01.921) 0:00:49.885 ********** 2026-03-02 01:06:43.908762 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-02 01:06:43.908770 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:06:43.908776 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-02 01:06:43.908783 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:06:43.908788 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-02 01:06:43.908793 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:06:43.908798 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-02 01:06:43.908802 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:06:43.908807 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-02 01:06:43.908812 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:06:43.908816 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-02 01:06:43.908820 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:06:43.908825 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-02 01:06:43.908830 | orchestrator | 2026-03-02 01:06:43.908834 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-02 
01:06:43.908839 | orchestrator | Monday 02 March 2026 01:04:56 +0000 (0:00:13.241) 0:01:03.127 ********** 2026-03-02 01:06:43.908847 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-02 01:06:43.908852 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-02 01:06:43.908863 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:06:43.908867 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:06:43.908871 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-02 01:06:43.908876 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:06:43.908880 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-02 01:06:43.908885 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:06:43.908889 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-02 01:06:43.908894 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:06:43.908900 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-02 01:06:43.908906 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:06:43.908915 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-02 01:06:43.908925 | orchestrator | 2026-03-02 01:06:43.908931 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-02 01:06:43.908937 | orchestrator | Monday 02 March 2026 01:04:59 +0000 (0:00:03.058) 0:01:06.185 ********** 2026-03-02 01:06:43.908943 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-02 01:06:43.908950 | 
orchestrator | skipping: [testbed-node-0] 2026-03-02 01:06:43.908956 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-02 01:06:43.908961 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:06:43.908967 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-02 01:06:43.908973 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:06:43.908980 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-02 01:06:43.908986 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:06:43.908993 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-02 01:06:43.908999 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:06:43.909005 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-02 01:06:43.909012 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:06:43.909019 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-03-02 01:06:43.909026 | orchestrator | 2026-03-02 01:06:43.909032 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-02 01:06:43.909040 | orchestrator | Monday 02 March 2026 01:05:00 +0000 (0:00:01.553) 0:01:07.739 ********** 2026-03-02 01:06:43.909045 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-02 01:06:43.909049 | orchestrator | 2026-03-02 01:06:43.909054 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-03-02 
01:06:43.909058 | orchestrator | Monday 02 March 2026 01:05:01 +0000 (0:00:00.690) 0:01:08.429 ********** 2026-03-02 01:06:43.909063 | orchestrator | skipping: [testbed-manager] 2026-03-02 01:06:43.909067 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:06:43.909072 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:06:43.909076 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:06:43.909084 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:06:43.909088 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:06:43.909093 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:06:43.909102 | orchestrator | 2026-03-02 01:06:43.909106 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-03-02 01:06:43.909111 | orchestrator | Monday 02 March 2026 01:05:02 +0000 (0:00:00.557) 0:01:08.986 ********** 2026-03-02 01:06:43.909115 | orchestrator | skipping: [testbed-manager] 2026-03-02 01:06:43.909120 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:06:43.909124 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:06:43.909129 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:06:43.909133 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:06:43.909137 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:06:43.909142 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:06:43.909146 | orchestrator | 2026-03-02 01:06:43.909151 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-02 01:06:43.909156 | orchestrator | Monday 02 March 2026 01:05:04 +0000 (0:00:02.204) 0:01:11.190 ********** 2026-03-02 01:06:43.909160 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-02 01:06:43.909165 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-02 01:06:43.909170 | orchestrator | skipping: 
[testbed-manager] 2026-03-02 01:06:43.909176 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:06:43.909182 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-02 01:06:43.909188 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:06:43.909194 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-02 01:06:43.909201 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:06:43.909212 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-02 01:06:43.909219 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:06:43.909225 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-02 01:06:43.909231 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:06:43.909238 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-02 01:06:43.909246 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:06:43.909252 | orchestrator | 2026-03-02 01:06:43.909259 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-02 01:06:43.909265 | orchestrator | Monday 02 March 2026 01:05:05 +0000 (0:00:01.494) 0:01:12.685 ********** 2026-03-02 01:06:43.909270 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-02 01:06:43.909364 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:06:43.909370 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-02 01:06:43.909374 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:06:43.909379 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  
2026-03-02 01:06:43.909384 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:06:43.909388 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-02 01:06:43.909393 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:06:43.909397 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-02 01:06:43.909401 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:06:43.909406 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-02 01:06:43.909410 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:06:43.909415 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-02 01:06:43.909419 | orchestrator | 2026-03-02 01:06:43.909424 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-02 01:06:43.909433 | orchestrator | Monday 02 March 2026 01:05:07 +0000 (0:00:01.404) 0:01:14.089 ********** 2026-03-02 01:06:43.909438 | orchestrator | [WARNING]: Skipped 2026-03-02 01:06:43.909442 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-02 01:06:43.909447 | orchestrator | due to this access issue: 2026-03-02 01:06:43.909451 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-02 01:06:43.909456 | orchestrator | not a directory 2026-03-02 01:06:43.909461 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-02 01:06:43.909465 | orchestrator | 2026-03-02 01:06:43.909469 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-02 01:06:43.909474 | orchestrator | Monday 02 March 2026 01:05:08 +0000 (0:00:01.000) 0:01:15.089 ********** 2026-03-02 
01:06:43.909478 | orchestrator | skipping: [testbed-manager] 2026-03-02 01:06:43.909484 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:06:43.909491 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:06:43.909501 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:06:43.909508 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:06:43.909527 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:06:43.909534 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:06:43.909540 | orchestrator | 2026-03-02 01:06:43.909547 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-02 01:06:43.909553 | orchestrator | Monday 02 March 2026 01:05:08 +0000 (0:00:00.852) 0:01:15.942 ********** 2026-03-02 01:06:43.909559 | orchestrator | skipping: [testbed-manager] 2026-03-02 01:06:43.909565 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:06:43.909571 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:06:43.909582 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:06:43.909589 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:06:43.909595 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:06:43.909601 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:06:43.909608 | orchestrator | 2026-03-02 01:06:43.909613 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-02 01:06:43.909618 | orchestrator | Monday 02 March 2026 01:05:09 +0000 (0:00:00.832) 0:01:16.775 ********** 2026-03-02 01:06:43.909624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.909635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.909641 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-02 01:06:43.909650 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': 
{}}}) 2026-03-02 01:06:43.909655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.909660 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.909667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.909673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.909677 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.909686 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-02 01:06:43.909691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.909699 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.909704 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.909710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.909716 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.909721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.909726 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.909734 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-02 01:06:43.909743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.909748 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.909752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.909759 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.909764 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.909769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.909776 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-03-02 01:06:43.909784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-02 01:06:43.909789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.909793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.909798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-02 01:06:43.909803 | orchestrator | 2026-03-02 01:06:43.909807 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-02 01:06:43.909812 | orchestrator | Monday 02 March 2026 01:05:13 +0000 (0:00:03.999) 0:01:20.774 ********** 2026-03-02 01:06:43.909817 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-02 01:06:43.909821 | orchestrator | skipping: [testbed-manager] 2026-03-02 01:06:43.909826 | orchestrator | 2026-03-02 01:06:43.909830 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-02 01:06:43.909837 | orchestrator | Monday 02 March 2026 01:05:14 +0000 (0:00:01.115) 0:01:21.890 ********** 2026-03-02 01:06:43.909843 | orchestrator | 2026-03-02 01:06:43.909850 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-02 01:06:43.909856 | orchestrator | Monday 02 March 2026 01:05:14 +0000 (0:00:00.067) 0:01:21.958 ********** 2026-03-02 01:06:43.909862 | orchestrator | 2026-03-02 01:06:43.909868 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-02 01:06:43.909874 | orchestrator | Monday 02 March 2026 01:05:15 +0000 (0:00:00.065) 0:01:22.024 ********** 2026-03-02 01:06:43.909881 | orchestrator | 2026-03-02 01:06:43.909887 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-02 01:06:43.909894 | orchestrator | Monday 02 March 2026 01:05:15 +0000 (0:00:00.063) 0:01:22.088 ********** 2026-03-02 01:06:43.909900 | orchestrator | 2026-03-02 01:06:43.909907 | orchestrator | TASK [prometheus : Flush 
handlers] ********************************************* 2026-03-02 01:06:43.909914 | orchestrator | Monday 02 March 2026 01:05:15 +0000 (0:00:00.222) 0:01:22.310 ********** 2026-03-02 01:06:43.909925 | orchestrator | 2026-03-02 01:06:43.909932 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-02 01:06:43.909939 | orchestrator | Monday 02 March 2026 01:05:15 +0000 (0:00:00.061) 0:01:22.372 ********** 2026-03-02 01:06:43.909945 | orchestrator | 2026-03-02 01:06:43.909952 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-02 01:06:43.909959 | orchestrator | Monday 02 March 2026 01:05:15 +0000 (0:00:00.063) 0:01:22.436 ********** 2026-03-02 01:06:43.909966 | orchestrator | 2026-03-02 01:06:43.909973 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-02 01:06:43.909979 | orchestrator | Monday 02 March 2026 01:05:15 +0000 (0:00:00.084) 0:01:22.521 ********** 2026-03-02 01:06:43.909986 | orchestrator | changed: [testbed-manager] 2026-03-02 01:06:43.909994 | orchestrator | 2026-03-02 01:06:43.910001 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-02 01:06:43.910033 | orchestrator | Monday 02 March 2026 01:05:33 +0000 (0:00:18.424) 0:01:40.945 ********** 2026-03-02 01:06:43.910042 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:06:43.910049 | orchestrator | changed: [testbed-manager] 2026-03-02 01:06:43.910056 | orchestrator | changed: [testbed-node-5] 2026-03-02 01:06:43.910063 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:06:43.910069 | orchestrator | changed: [testbed-node-4] 2026-03-02 01:06:43.910075 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:06:43.910082 | orchestrator | changed: [testbed-node-3] 2026-03-02 01:06:43.910089 | orchestrator | 2026-03-02 01:06:43.910096 | orchestrator | RUNNING HANDLER [prometheus : 
Restart prometheus-mysqld-exporter container] **** 2026-03-02 01:06:43.910103 | orchestrator | Monday 02 March 2026 01:05:46 +0000 (0:00:12.925) 0:01:53.871 ********** 2026-03-02 01:06:43.910110 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:06:43.910114 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:06:43.910118 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:06:43.910123 | orchestrator | 2026-03-02 01:06:43.910129 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-02 01:06:43.910136 | orchestrator | Monday 02 March 2026 01:05:52 +0000 (0:00:05.189) 0:01:59.061 ********** 2026-03-02 01:06:43.910143 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:06:43.910150 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:06:43.910156 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:06:43.910163 | orchestrator | 2026-03-02 01:06:43.910169 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-02 01:06:43.910174 | orchestrator | Monday 02 March 2026 01:05:57 +0000 (0:00:05.094) 0:02:04.155 ********** 2026-03-02 01:06:43.910178 | orchestrator | changed: [testbed-node-3] 2026-03-02 01:06:43.910183 | orchestrator | changed: [testbed-manager] 2026-03-02 01:06:43.910187 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:06:43.910191 | orchestrator | changed: [testbed-node-5] 2026-03-02 01:06:43.910196 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:06:43.910200 | orchestrator | changed: [testbed-node-4] 2026-03-02 01:06:43.910205 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:06:43.910209 | orchestrator | 2026-03-02 01:06:43.910213 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-02 01:06:43.910218 | orchestrator | Monday 02 March 2026 01:06:10 +0000 (0:00:13.508) 0:02:17.664 ********** 2026-03-02 01:06:43.910222 | orchestrator | changed: 
[testbed-manager] 2026-03-02 01:06:43.910228 | orchestrator | 2026-03-02 01:06:43.910235 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-02 01:06:43.910242 | orchestrator | Monday 02 March 2026 01:06:21 +0000 (0:00:11.087) 0:02:28.752 ********** 2026-03-02 01:06:43.910248 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:06:43.910255 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:06:43.910262 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:06:43.910269 | orchestrator | 2026-03-02 01:06:43.910275 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-02 01:06:43.910287 | orchestrator | Monday 02 March 2026 01:06:26 +0000 (0:00:05.172) 0:02:33.924 ********** 2026-03-02 01:06:43.910292 | orchestrator | changed: [testbed-manager] 2026-03-02 01:06:43.910297 | orchestrator | 2026-03-02 01:06:43.910301 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-02 01:06:43.910305 | orchestrator | Monday 02 March 2026 01:06:30 +0000 (0:00:03.883) 0:02:37.808 ********** 2026-03-02 01:06:43.910310 | orchestrator | changed: [testbed-node-4] 2026-03-02 01:06:43.910316 | orchestrator | changed: [testbed-node-3] 2026-03-02 01:06:43.910322 | orchestrator | changed: [testbed-node-5] 2026-03-02 01:06:43.910329 | orchestrator | 2026-03-02 01:06:43.910336 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 01:06:43.910344 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-02 01:06:43.910351 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-02 01:06:43.910360 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-02 01:06:43.910365 | orchestrator | 
testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-02 01:06:43.910370 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-02 01:06:43.910375 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-02 01:06:43.910379 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-02 01:06:43.910384 | orchestrator | 2026-03-02 01:06:43.910388 | orchestrator | 2026-03-02 01:06:43.910392 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 01:06:43.910397 | orchestrator | Monday 02 March 2026 01:06:40 +0000 (0:00:09.854) 0:02:47.662 ********** 2026-03-02 01:06:43.910402 | orchestrator | =============================================================================== 2026-03-02 01:06:43.910406 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 22.96s 2026-03-02 01:06:43.910411 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.42s 2026-03-02 01:06:43.910415 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.51s 2026-03-02 01:06:43.910420 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 13.24s 2026-03-02 01:06:43.910428 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.93s 2026-03-02 01:06:43.910433 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 11.09s 2026-03-02 01:06:43.910439 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 9.85s 2026-03-02 01:06:43.910446 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.93s 2026-03-02 01:06:43.910452 | orchestrator | 
service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.72s 2026-03-02 01:06:43.910459 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.19s 2026-03-02 01:06:43.910466 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.17s 2026-03-02 01:06:43.910473 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.09s 2026-03-02 01:06:43.910479 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.00s 2026-03-02 01:06:43.910486 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 3.88s 2026-03-02 01:06:43.910491 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.06s 2026-03-02 01:06:43.910499 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.73s 2026-03-02 01:06:43.910504 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.28s 2026-03-02 01:06:43.910508 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.20s 2026-03-02 01:06:43.910512 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.92s 2026-03-02 01:06:43.910550 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 1.90s 2026-03-02 01:06:43.910556 | orchestrator | 2026-03-02 01:06:43 | INFO  | Task 0d64f8b9-ff78-434e-92d4-1e3077fb9c31 is in state STARTED 2026-03-02 01:06:43.910561 | orchestrator | 2026-03-02 01:06:43 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:06:46.946386 | orchestrator | 2026-03-02 01:06:46 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state STARTED 2026-03-02 01:06:46.947766 | orchestrator | 2026-03-02 01:06:46 | INFO  | Task beb6fba3-4a1f-4a81-be47-fa0f340ac728 is in state STARTED 2026-03-02 
01:06:46.948763 | orchestrator | 2026-03-02 01:06:46 | INFO  | Task bc23a46d-9f45-4fa4-90d9-a24759cb7fb9 is in state STARTED 2026-03-02 01:06:46.949809 | orchestrator | 2026-03-02 01:06:46 | INFO  | Task 0d64f8b9-ff78-434e-92d4-1e3077fb9c31 is in state STARTED 2026-03-02 01:06:46.949873 | orchestrator | 2026-03-02 01:06:46 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:06:56.085110 | orchestrator | 2026-03-02 01:06:56 | INFO  | Task c0655a63-5aa9-412f-bc02-e040a33e611a is in state SUCCESS 2026-03-02 01:06:56.091848 | orchestrator | 2026-03-02 01:06:56 | INFO  | Task 9c12ba7e-1387-4521-8f9d-10470266b45e is in state STARTED 2026-03-02 01:07:41.770607 | orchestrator | 2026-03-02 01:07:41 | INFO  | Task c10c5ce3-5a91-4f72-8034-a26b162e1c1e is in state STARTED 2026-03-02 01:07:41.770878 | orchestrator | 2026-03-02 01:07:41 | INFO  | Task
beb6fba3-4a1f-4a81-be47-fa0f340ac728 is in state STARTED 2026-03-02 01:07:41.771560 | orchestrator | 2026-03-02 01:07:41 | INFO  | Task bc23a46d-9f45-4fa4-90d9-a24759cb7fb9 is in state STARTED 2026-03-02 01:07:41.772249 | orchestrator | 2026-03-02 01:07:41 | INFO  | Task 9c12ba7e-1387-4521-8f9d-10470266b45e is in state STARTED 2026-03-02 01:07:41.773973 | orchestrator | 2026-03-02 01:07:41.774100 | orchestrator | 2026-03-02 01:07:41.774115 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-03-02 01:07:41.774123 | orchestrator | 2026-03-02 01:07:41.774129 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-03-02 01:07:41.774136 | orchestrator | Monday 02 March 2026 01:00:23 +0000 (0:00:00.113) 0:00:00.113 ********** 2026-03-02 01:07:41.774143 | orchestrator | changed: [localhost] 2026-03-02 01:07:41.774150 | orchestrator | 2026-03-02 01:07:41.774157 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-03-02 01:07:41.774161 | orchestrator | Monday 02 March 2026 01:00:24 +0000 (0:00:00.915) 0:00:01.029 ********** 2026-03-02 01:07:41.774165 | orchestrator | 2026-03-02 01:07:41.774178 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-02 01:07:41.774182 | orchestrator | 2026-03-02 01:07:41.774190 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-02 01:07:41.774194 | orchestrator | 2026-03-02 01:07:41.774198 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-02 01:07:41.774202 | orchestrator | 2026-03-02 01:07:41.774206 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-02 01:07:41.774210 | orchestrator | 2026-03-02 01:07:41.774214 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is 
running] **************** 2026-03-02 01:07:41.774217 | orchestrator | 2026-03-02 01:07:41.774221 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-02 01:07:41.774225 | orchestrator | 2026-03-02 01:07:41.774229 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2026-03-02 01:07:41.774233 | orchestrator | changed: [localhost] 2026-03-02 01:07:41.774236 | orchestrator | 2026-03-02 01:07:41.774253 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-03-02 01:07:41.774257 | orchestrator | Monday 02 March 2026 01:06:17 +0000 (0:05:52.573) 0:05:53.602 ********** 2026-03-02 01:07:41.774261 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 2026-03-02 01:07:41.774264 | orchestrator | changed: [localhost] 2026-03-02 01:07:41.774268 | orchestrator | 2026-03-02 01:07:41.774273 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-02 01:07:41.774276 | orchestrator | 2026-03-02 01:07:41.774280 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-02 01:07:41.774284 | orchestrator | Monday 02 March 2026 01:06:53 +0000 (0:00:35.560) 0:06:29.163 ********** 2026-03-02 01:07:41.774288 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:07:41.774291 | orchestrator | ok: [testbed-node-1] 2026-03-02 01:07:41.774295 | orchestrator | ok: [testbed-node-2] 2026-03-02 01:07:41.774299 | orchestrator | 2026-03-02 01:07:41.774303 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-02 01:07:41.774307 | orchestrator | Monday 02 March 2026 01:06:53 +0000 (0:00:00.298) 0:06:29.462 ********** 2026-03-02 01:07:41.774310 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-03-02 01:07:41.774315 | orchestrator | ok: [testbed-node-1] 
=> (item=enable_ironic_False) 2026-03-02 01:07:41.774318 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-03-02 01:07:41.774322 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-03-02 01:07:41.774326 | orchestrator | 2026-03-02 01:07:41.774330 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-03-02 01:07:41.774334 | orchestrator | skipping: no hosts matched 2026-03-02 01:07:41.774338 | orchestrator | 2026-03-02 01:07:41.774342 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 01:07:41.774346 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 01:07:41.774350 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 01:07:41.774361 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 01:07:41.774365 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 01:07:41.774371 | orchestrator | 2026-03-02 01:07:41.774377 | orchestrator | 2026-03-02 01:07:41.774384 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 01:07:41.774419 | orchestrator | Monday 02 March 2026 01:06:53 +0000 (0:00:00.499) 0:06:29.962 ********** 2026-03-02 01:07:41.774426 | orchestrator | =============================================================================== 2026-03-02 01:07:41.774432 | orchestrator | Download ironic-agent initramfs --------------------------------------- 352.57s 2026-03-02 01:07:41.774439 | orchestrator | Download ironic-agent kernel ------------------------------------------- 35.56s 2026-03-02 01:07:41.774445 | orchestrator | Ensure the destination directory exists --------------------------------- 0.92s 
2026-03-02 01:07:41.774451 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s
2026-03-02 01:07:41.774463 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2026-03-02 01:07:41.774470 | orchestrator |
2026-03-02 01:07:41.774476 | orchestrator | 2026-03-02 01:07:41 | INFO  | Task 0d64f8b9-ff78-434e-92d4-1e3077fb9c31 is in state SUCCESS
2026-03-02 01:07:41.775443 | orchestrator |
2026-03-02 01:07:41.775467 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-02 01:07:41.775472 | orchestrator |
2026-03-02 01:07:41.775476 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-02 01:07:41.775480 | orchestrator | Monday 02 March 2026 01:05:02 +0000 (0:00:00.193) 0:00:00.193 **********
2026-03-02 01:07:41.775494 | orchestrator | ok: [testbed-node-0]
2026-03-02 01:07:41.775499 | orchestrator | ok: [testbed-node-1]
2026-03-02 01:07:41.775502 | orchestrator | ok: [testbed-node-2]
2026-03-02 01:07:41.775506 | orchestrator |
2026-03-02 01:07:41.775510 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-02 01:07:41.775514 | orchestrator | Monday 02 March 2026 01:05:02 +0000 (0:00:00.326) 0:00:00.520 **********
2026-03-02 01:07:41.775517 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-03-02 01:07:41.775522 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-03-02 01:07:41.775526 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-03-02 01:07:41.775529 | orchestrator |
2026-03-02 01:07:41.775533 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-03-02 01:07:41.775537 | orchestrator |
2026-03-02 01:07:41.775541 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-02 01:07:41.775544 | orchestrator | Monday 02 March 2026 01:05:03 +0000 (0:00:00.393) 0:00:00.914 **********
2026-03-02 01:07:41.775548 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 01:07:41.775552 | orchestrator |
2026-03-02 01:07:41.775556 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-03-02 01:07:41.775560 | orchestrator | Monday 02 March 2026 01:05:03 +0000 (0:00:00.452) 0:00:01.366 **********
2026-03-02 01:07:41.775563 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-03-02 01:07:41.775567 | orchestrator |
2026-03-02 01:07:41.775643 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-03-02 01:07:41.775653 | orchestrator | Monday 02 March 2026 01:05:07 +0000 (0:00:03.892) 0:00:05.259 **********
2026-03-02 01:07:41.775659 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-03-02 01:07:41.775666 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-03-02 01:07:41.775672 | orchestrator |
2026-03-02 01:07:41.775679 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-03-02 01:07:41.775683 | orchestrator | Monday 02 March 2026 01:05:14 +0000 (0:00:06.577) 0:00:11.837 **********
2026-03-02 01:07:41.775687 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-02 01:07:41.775691 | orchestrator |
2026-03-02 01:07:41.775695 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-03-02 01:07:41.775699 | orchestrator | Monday 02 March 2026 01:05:17 +0000 (0:00:03.620) 0:00:15.458 **********
2026-03-02 01:07:41.775703 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-03-02 01:07:41.775710 | orchestrator | 
[WARNING]: Module did not set no_log for update_password 2026-03-02 01:07:41.775718 | orchestrator | 2026-03-02 01:07:41.775727 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-02 01:07:41.775733 | orchestrator | Monday 02 March 2026 01:05:22 +0000 (0:00:04.411) 0:00:19.869 ********** 2026-03-02 01:07:41.775739 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-02 01:07:41.775744 | orchestrator | 2026-03-02 01:07:41.775750 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-02 01:07:41.775756 | orchestrator | Monday 02 March 2026 01:05:26 +0000 (0:00:04.025) 0:00:23.895 ********** 2026-03-02 01:07:41.775761 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-02 01:07:41.775767 | orchestrator | 2026-03-02 01:07:41.775773 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-02 01:07:41.775779 | orchestrator | Monday 02 March 2026 01:05:29 +0000 (0:00:03.875) 0:00:27.770 ********** 2026-03-02 01:07:41.775806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-02 01:07:41.775819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-02 01:07:41.775827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-02 01:07:41.775836 | orchestrator | 2026-03-02 01:07:41.775842 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-02 01:07:41.775849 | orchestrator | Monday 02 March 2026 01:05:33 +0000 (0:00:03.606) 0:00:31.377 ********** 2026-03-02 01:07:41.775856 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:07:41.775862 | orchestrator | 2026-03-02 01:07:41.775869 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-02 01:07:41.775881 | orchestrator | Monday 02 March 2026 01:05:34 +0000 (0:00:00.694) 0:00:32.072 ********** 2026-03-02 01:07:41.775888 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:07:41.775895 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:07:41.775901 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:07:41.775908 | orchestrator | 2026-03-02 01:07:41.775914 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-02 01:07:41.775917 | orchestrator | Monday 02 March 2026 01:05:40 +0000 (0:00:06.000) 0:00:38.073 ********** 2026-03-02 01:07:41.775921 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-02 01:07:41.775925 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-02 01:07:41.775929 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-02 01:07:41.775932 | orchestrator | 2026-03-02 
01:07:41.775937 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-02 01:07:41.775944 | orchestrator | Monday 02 March 2026 01:05:42 +0000 (0:00:01.934) 0:00:40.008 ********** 2026-03-02 01:07:41.775949 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-02 01:07:41.775956 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-02 01:07:41.775962 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-02 01:07:41.775968 | orchestrator | 2026-03-02 01:07:41.775975 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-02 01:07:41.775981 | orchestrator | Monday 02 March 2026 01:05:43 +0000 (0:00:00.959) 0:00:40.967 ********** 2026-03-02 01:07:41.775987 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:07:41.775994 | orchestrator | ok: [testbed-node-1] 2026-03-02 01:07:41.776000 | orchestrator | ok: [testbed-node-2] 2026-03-02 01:07:41.776007 | orchestrator | 2026-03-02 01:07:41.776013 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-02 01:07:41.776020 | orchestrator | Monday 02 March 2026 01:05:43 +0000 (0:00:00.793) 0:00:41.761 ********** 2026-03-02 01:07:41.776026 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:07:41.776032 | orchestrator | 2026-03-02 01:07:41.776037 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-02 01:07:41.776044 | orchestrator | Monday 02 March 2026 01:05:44 +0000 (0:00:00.128) 0:00:41.890 ********** 2026-03-02 01:07:41.776050 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:07:41.776055 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:07:41.776066 | orchestrator | skipping: 
[testbed-node-2] 2026-03-02 01:07:41.776071 | orchestrator | 2026-03-02 01:07:41.776077 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-02 01:07:41.776084 | orchestrator | Monday 02 March 2026 01:05:44 +0000 (0:00:00.321) 0:00:42.212 ********** 2026-03-02 01:07:41.776090 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:07:41.776096 | orchestrator | 2026-03-02 01:07:41.776101 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-02 01:07:41.776108 | orchestrator | Monday 02 March 2026 01:05:44 +0000 (0:00:00.533) 0:00:42.745 ********** 2026-03-02 01:07:41.776119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-02 01:07:41.776132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-02 01:07:41.776145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-02 01:07:41.776152 | orchestrator | 2026-03-02 01:07:41.776159 | orchestrator | TASK 
[service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-02 01:07:41.776165 | orchestrator | Monday 02 March 2026 01:05:50 +0000 (0:00:05.568) 0:00:48.314 ********** 2026-03-02 01:07:41.776176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-02 01:07:41.776184 | orchestrator | skipping: [testbed-node-0] 2026-03-02 
01:07:41.776191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-02 01:07:41.776201 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:07:41.776214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-02 01:07:41.776221 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:07:41.776228 | orchestrator | 2026-03-02 01:07:41.776235 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-02 01:07:41.776241 | orchestrator | Monday 02 March 2026 01:05:54 +0000 (0:00:03.538) 0:00:51.853 ********** 2026-03-02 01:07:41.776248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 
'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-02 01:07:41.776259 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:07:41.776272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-02 01:07:41.776278 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:07:41.776286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-02 01:07:41.776294 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:07:41.776298 | orchestrator | 2026-03-02 01:07:41.776303 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-02 01:07:41.776307 | orchestrator | Monday 02 March 2026 01:05:57 +0000 (0:00:03.117) 0:00:54.970 ********** 2026-03-02 01:07:41.776313 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:07:41.776319 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:07:41.776325 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:07:41.776332 | orchestrator | 2026-03-02 01:07:41.776338 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-02 01:07:41.776343 | orchestrator | Monday 02 March 2026 01:06:03 +0000 
(0:00:06.553) 0:01:01.524 ********** 2026-03-02 01:07:41.776350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-02 01:07:41.776359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-02 01:07:41.776367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-02 01:07:41.776372 | orchestrator | 2026-03-02 01:07:41.776379 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-02 01:07:41.776383 | orchestrator | Monday 02 March 2026 01:06:07 +0000 (0:00:03.879) 0:01:05.404 ********** 2026-03-02 01:07:41.776403 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:07:41.776409 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:07:41.776415 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:07:41.776421 | orchestrator | 2026-03-02 01:07:41.776427 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-02 01:07:41.776434 | orchestrator | Monday 02 March 2026 01:06:13 +0000 
(0:00:05.988) 0:01:11.393 ********** 2026-03-02 01:07:41.776440 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:07:41.776448 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:07:41.776455 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:07:41.776462 | orchestrator | 2026-03-02 01:07:41.776469 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-02 01:07:41.776475 | orchestrator | Monday 02 March 2026 01:06:18 +0000 (0:00:04.995) 0:01:16.388 ********** 2026-03-02 01:07:41.776482 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:07:41.776487 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:07:41.776492 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:07:41.776497 | orchestrator | 2026-03-02 01:07:41.776502 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-02 01:07:41.776506 | orchestrator | Monday 02 March 2026 01:06:21 +0000 (0:00:03.400) 0:01:19.788 ********** 2026-03-02 01:07:41.776510 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:07:41.776517 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:07:41.776526 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:07:41.776531 | orchestrator | 2026-03-02 01:07:41.776535 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-02 01:07:41.776540 | orchestrator | Monday 02 March 2026 01:06:25 +0000 (0:00:03.976) 0:01:23.765 ********** 2026-03-02 01:07:41.776545 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:07:41.776549 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:07:41.776554 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:07:41.776558 | orchestrator | 2026-03-02 01:07:41.776563 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-02 01:07:41.776567 | orchestrator | Monday 02 March 2026 01:06:28 +0000 
(0:00:02.652) 0:01:26.417 ********** 2026-03-02 01:07:41.776572 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:07:41.776576 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:07:41.776581 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:07:41.776585 | orchestrator | 2026-03-02 01:07:41.776590 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-02 01:07:41.776595 | orchestrator | Monday 02 March 2026 01:06:28 +0000 (0:00:00.293) 0:01:26.711 ********** 2026-03-02 01:07:41.776602 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-02 01:07:41.776608 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:07:41.776614 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-02 01:07:41.776620 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:07:41.776626 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-02 01:07:41.776634 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:07:41.776640 | orchestrator | 2026-03-02 01:07:41.776647 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-02 01:07:41.776653 | orchestrator | Monday 02 March 2026 01:06:32 +0000 (0:00:03.590) 0:01:30.302 ********** 2026-03-02 01:07:41.776659 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:07:41.776666 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:07:41.776672 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:07:41.776679 | orchestrator | 2026-03-02 01:07:41.776685 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-02 01:07:41.776692 | orchestrator | Monday 02 March 2026 01:06:36 +0000 (0:00:04.060) 0:01:34.362 ********** 2026-03-02 01:07:41.776703 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-02 01:07:41.776716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-02 01:07:41.776721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-02 01:07:41.776725 | orchestrator | 2026-03-02 01:07:41.776729 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-02 01:07:41.776733 | orchestrator | Monday 02 March 2026 01:06:40 +0000 (0:00:03.460) 0:01:37.822 ********** 2026-03-02 01:07:41.776737 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:07:41.776740 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:07:41.776744 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:07:41.776748 | orchestrator | 2026-03-02 01:07:41.776751 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-02 01:07:41.776758 | orchestrator | Monday 02 March 2026 01:06:40 +0000 (0:00:00.278) 0:01:38.100 ********** 2026-03-02 01:07:41.776761 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:07:41.776765 | orchestrator 
| 2026-03-02 01:07:41.776771 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-03-02 01:07:41.776775 | orchestrator | Monday 02 March 2026 01:06:42 +0000 (0:00:02.354) 0:01:40.455 ********** 2026-03-02 01:07:41.776778 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:07:41.776782 | orchestrator | 2026-03-02 01:07:41.776786 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-02 01:07:41.776790 | orchestrator | Monday 02 March 2026 01:06:44 +0000 (0:00:02.198) 0:01:42.654 ********** 2026-03-02 01:07:41.776793 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:07:41.776797 | orchestrator | 2026-03-02 01:07:41.776801 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-02 01:07:41.776804 | orchestrator | Monday 02 March 2026 01:06:46 +0000 (0:00:01.995) 0:01:44.650 ********** 2026-03-02 01:07:41.776808 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:07:41.776812 | orchestrator | 2026-03-02 01:07:41.776815 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-02 01:07:41.776819 | orchestrator | Monday 02 March 2026 01:07:12 +0000 (0:00:25.570) 0:02:10.221 ********** 2026-03-02 01:07:41.776823 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:07:41.776827 | orchestrator | 2026-03-02 01:07:41.776830 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-02 01:07:41.776834 | orchestrator | Monday 02 March 2026 01:07:14 +0000 (0:00:02.519) 0:02:12.741 ********** 2026-03-02 01:07:41.776838 | orchestrator | 2026-03-02 01:07:41.776845 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-02 01:07:41.776849 | orchestrator | Monday 02 March 2026 01:07:15 +0000 (0:00:00.056) 0:02:12.797 ********** 2026-03-02 01:07:41.776852 | orchestrator | 
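The database bootstrap sequence logged above (enable `log_bin_trust_function_creators`, run the Glance bootstrap container, disable the flag again) can be sketched as plain MariaDB client calls. This is a hedged illustration of what those three tasks accomplish, not the exact commands kolla-ansible issues; the VIP address and credential handling are assumptions for illustration:

```shell
# Hedged sketch of the enable/bootstrap/disable sequence logged above.
# Host address and password handling are assumptions, not taken from the job.
mysql -h 192.168.16.9 -u root -p"$DB_ROOT_PASSWORD" \
  -e "SET GLOBAL log_bin_trust_function_creators = 1;"

# The "Running Glance bootstrap container" task then performs the schema
# migration inside the glance-api image, roughly:
#   glance-manage db sync

mysql -h 192.168.16.9 -u root -p"$DB_ROOT_PASSWORD" \
  -e "SET GLOBAL log_bin_trust_function_creators = 0;"
```

The toggle is needed because, with binary logging enabled, MariaDB refuses stored function and trigger creation by accounts without SUPER unless this global is set; kolla-ansible enables it only for the migration window and reverts it immediately afterwards.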
2026-03-02 01:07:41.776856 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-02 01:07:41.776860 | orchestrator | Monday 02 March 2026 01:07:15 +0000 (0:00:00.059) 0:02:12.856 ********** 2026-03-02 01:07:41.776864 | orchestrator | 2026-03-02 01:07:41.776868 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-02 01:07:41.776872 | orchestrator | Monday 02 March 2026 01:07:15 +0000 (0:00:00.059) 0:02:12.916 ********** 2026-03-02 01:07:41.776876 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:07:41.776879 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:07:41.776883 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:07:41.776887 | orchestrator | 2026-03-02 01:07:41.776891 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 01:07:41.776895 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-02 01:07:41.776899 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-02 01:07:41.776903 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-02 01:07:41.776907 | orchestrator | 2026-03-02 01:07:41.776911 | orchestrator | 2026-03-02 01:07:41.776914 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 01:07:41.776918 | orchestrator | Monday 02 March 2026 01:07:39 +0000 (0:00:24.315) 0:02:37.231 ********** 2026-03-02 01:07:41.776922 | orchestrator | =============================================================================== 2026-03-02 01:07:41.776926 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 25.57s 2026-03-02 01:07:41.776929 | orchestrator | glance : Restart glance-api container 
---------------------------------- 24.32s 2026-03-02 01:07:41.776933 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.58s 2026-03-02 01:07:41.776940 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 6.55s 2026-03-02 01:07:41.776944 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 6.00s 2026-03-02 01:07:41.776947 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.99s 2026-03-02 01:07:41.776951 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.57s 2026-03-02 01:07:41.776955 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.00s 2026-03-02 01:07:41.776959 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.41s 2026-03-02 01:07:41.776962 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.06s 2026-03-02 01:07:41.776966 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 4.03s 2026-03-02 01:07:41.776970 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.98s 2026-03-02 01:07:41.776974 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.89s 2026-03-02 01:07:41.776977 | orchestrator | glance : Copying over config.json files for services -------------------- 3.88s 2026-03-02 01:07:41.776981 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.88s 2026-03-02 01:07:41.776985 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.62s 2026-03-02 01:07:41.776989 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.61s 2026-03-02 01:07:41.776992 | orchestrator | glance : Copying over glance-haproxy-tls.cfg 
---------------------------- 3.59s 2026-03-02 01:07:41.776996 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.54s 2026-03-02 01:07:41.777000 | orchestrator | glance : Check glance containers ---------------------------------------- 3.46s 2026-03-02 01:07:41.777004 | orchestrator | 2026-03-02 01:07:41 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:07:44.807066 | orchestrator | 2026-03-02 01:07:44 | INFO  | Task c10c5ce3-5a91-4f72-8034-a26b162e1c1e is in state STARTED 2026-03-02 01:07:44.810623 | orchestrator | 2026-03-02 01:07:44 | INFO  | Task beb6fba3-4a1f-4a81-be47-fa0f340ac728 is in state STARTED 2026-03-02 01:07:44.812900 | orchestrator | 2026-03-02 01:07:44 | INFO  | Task bc23a46d-9f45-4fa4-90d9-a24759cb7fb9 is in state STARTED 2026-03-02 01:07:44.815313 | orchestrator | 2026-03-02 01:07:44 | INFO  | Task 9c12ba7e-1387-4521-8f9d-10470266b45e is in state STARTED 2026-03-02 01:07:44.815439 | orchestrator | 2026-03-02 01:07:44 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:07:47.860485 | orchestrator | 2026-03-02 01:07:47 | INFO  | Task c10c5ce3-5a91-4f72-8034-a26b162e1c1e is in state STARTED 2026-03-02 01:07:47.861758 | orchestrator | 2026-03-02 01:07:47 | INFO  | Task beb6fba3-4a1f-4a81-be47-fa0f340ac728 is in state STARTED 2026-03-02 01:07:47.863311 | orchestrator | 2026-03-02 01:07:47 | INFO  | Task bc23a46d-9f45-4fa4-90d9-a24759cb7fb9 is in state STARTED 2026-03-02 01:07:47.864922 | orchestrator | 2026-03-02 01:07:47 | INFO  | Task 9c12ba7e-1387-4521-8f9d-10470266b45e is in state STARTED 2026-03-02 01:07:47.865012 | orchestrator | 2026-03-02 01:07:47 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:07:50.915148 | orchestrator | 2026-03-02 01:07:50 | INFO  | Task c10c5ce3-5a91-4f72-8034-a26b162e1c1e is in state STARTED 2026-03-02 01:07:50.916519 | orchestrator | 2026-03-02 01:07:50 | INFO  | Task beb6fba3-4a1f-4a81-be47-fa0f340ac728 is in state STARTED 2026-03-02 
01:07:50.917641 | orchestrator | 2026-03-02 01:07:50 | INFO  | Task bc23a46d-9f45-4fa4-90d9-a24759cb7fb9 is in state STARTED 2026-03-02 01:07:50.919058 | orchestrator | 2026-03-02 01:07:50 | INFO  | Task 9c12ba7e-1387-4521-8f9d-10470266b45e is in state STARTED 2026-03-02 01:07:50.919100 | orchestrator | 2026-03-02 01:07:50 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:07:53.961298 | orchestrator | 2026-03-02 01:07:53 | INFO  | Task c10c5ce3-5a91-4f72-8034-a26b162e1c1e is in state STARTED 2026-03-02 01:07:53.963445 | orchestrator | 2026-03-02 01:07:53 | INFO  | Task beb6fba3-4a1f-4a81-be47-fa0f340ac728 is in state STARTED 2026-03-02 01:07:53.968133 | orchestrator | 2026-03-02 01:07:53 | INFO  | Task bc23a46d-9f45-4fa4-90d9-a24759cb7fb9 is in state SUCCESS 2026-03-02 01:07:53.968226 | orchestrator | 2026-03-02 01:07:53.970150 | orchestrator | 2026-03-02 01:07:53.970192 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-02 01:07:53.970226 | orchestrator | 2026-03-02 01:07:53.970233 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-02 01:07:53.970238 | orchestrator | Monday 02 March 2026 01:05:08 +0000 (0:00:00.244) 0:00:00.244 ********** 2026-03-02 01:07:53.970243 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:07:53.970248 | orchestrator | ok: [testbed-node-1] 2026-03-02 01:07:53.970252 | orchestrator | ok: [testbed-node-2] 2026-03-02 01:07:53.970257 | orchestrator | 2026-03-02 01:07:53.970261 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-02 01:07:53.970266 | orchestrator | Monday 02 March 2026 01:05:08 +0000 (0:00:00.286) 0:00:00.530 ********** 2026-03-02 01:07:53.970271 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-02 01:07:53.970276 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-02 01:07:53.970281 | orchestrator | ok: 
[testbed-node-2] => (item=enable_cinder_True) 2026-03-02 01:07:53.970285 | orchestrator | 2026-03-02 01:07:53.970324 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-02 01:07:53.970333 | orchestrator | 2026-03-02 01:07:53.970339 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-02 01:07:53.970346 | orchestrator | Monday 02 March 2026 01:05:09 +0000 (0:00:00.403) 0:00:00.934 ********** 2026-03-02 01:07:53.970352 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:07:53.970441 | orchestrator | 2026-03-02 01:07:53.970452 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-02 01:07:53.970456 | orchestrator | Monday 02 March 2026 01:05:09 +0000 (0:00:00.532) 0:00:01.467 ********** 2026-03-02 01:07:53.970461 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-02 01:07:53.970465 | orchestrator | 2026-03-02 01:07:53.970468 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-02 01:07:53.970472 | orchestrator | Monday 02 March 2026 01:05:13 +0000 (0:00:03.445) 0:00:04.912 ********** 2026-03-02 01:07:53.970477 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-02 01:07:53.970482 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-02 01:07:53.970489 | orchestrator | 2026-03-02 01:07:53.970498 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-02 01:07:53.970504 | orchestrator | Monday 02 March 2026 01:05:20 +0000 (0:00:07.146) 0:00:12.059 ********** 2026-03-02 01:07:53.970511 | orchestrator | ok: [testbed-node-0] => (item=service) 
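The service-ks-register steps for cinder that follow (creating the service, the internal and public endpoints, the service project, the user, and the role grants) have straightforward CLI equivalents. A hedged sketch using the standard openstack client, with names and URLs taken from the log; this illustrates what the role registers, not how it runs (the role uses Ansible OpenStack modules, not these commands):

```shell
# Illustrative openstack CLI equivalents of the service-ks-register tasks
# for cinder shown in this play; for orientation only.
openstack service create --name cinderv3 volumev3

openstack endpoint create cinderv3 internal \
  "https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s"
openstack endpoint create cinderv3 public \
  "https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s"

openstack project create service                 # reported "ok": already exists
openstack user create --project service cinder  # password handling omitted here
openstack role add --user cinder --project service admin
openstack role add --user cinder --project service service
```

The two `role add` calls mirror the "Granting user roles" task, which assigned both the admin and service roles to the cinder user in the service project.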
2026-03-02 01:07:53.970517 | orchestrator | 2026-03-02 01:07:53.970523 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-02 01:07:53.970529 | orchestrator | Monday 02 March 2026 01:05:23 +0000 (0:00:03.325) 0:00:15.384 ********** 2026-03-02 01:07:53.970544 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-02 01:07:53.970551 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-02 01:07:53.970557 | orchestrator | 2026-03-02 01:07:53.970563 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-02 01:07:53.970570 | orchestrator | Monday 02 March 2026 01:05:27 +0000 (0:00:04.357) 0:00:19.742 ********** 2026-03-02 01:07:53.970735 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-02 01:07:53.970742 | orchestrator | 2026-03-02 01:07:53.970745 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-02 01:07:53.970749 | orchestrator | Monday 02 March 2026 01:05:31 +0000 (0:00:03.686) 0:00:23.428 ********** 2026-03-02 01:07:53.970753 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-02 01:07:53.970757 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-02 01:07:53.970761 | orchestrator | 2026-03-02 01:07:53.970764 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-02 01:07:53.970768 | orchestrator | Monday 02 March 2026 01:05:40 +0000 (0:00:08.475) 0:00:31.904 ********** 2026-03-02 01:07:53.970774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-02 01:07:53.970787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-02 01:07:53.970791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-02 01:07:53.970796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.970806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.970811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.970815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.970824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.970828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.970832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.970841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-02 
01:07:53.970845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.970849 | orchestrator | 2026-03-02 01:07:53.970853 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-02 01:07:53.970857 | orchestrator | Monday 02 March 2026 01:05:42 +0000 (0:00:02.486) 0:00:34.390 ********** 2026-03-02 01:07:53.970861 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:07:53.970865 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:07:53.970869 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:07:53.970873 | orchestrator | 2026-03-02 01:07:53.970877 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-02 01:07:53.970880 | orchestrator | Monday 02 March 2026 01:05:42 +0000 (0:00:00.244) 0:00:34.635 ********** 2026-03-02 01:07:53.970884 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:07:53.971026 | orchestrator | 2026-03-02 01:07:53.971031 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-02 01:07:53.971035 | orchestrator | Monday 02 March 2026 01:05:43 +0000 (0:00:00.562) 0:00:35.198 ********** 2026-03-02 01:07:53.971050 | orchestrator | changed: [testbed-node-0] 
=> (item=cinder-volume) 2026-03-02 01:07:53.971055 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-02 01:07:53.971059 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-02 01:07:53.971062 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-02 01:07:53.971066 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-02 01:07:53.971070 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-02 01:07:53.971074 | orchestrator | 2026-03-02 01:07:53.971077 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-02 01:07:53.971081 | orchestrator | Monday 02 March 2026 01:05:44 +0000 (0:00:01.514) 0:00:36.712 ********** 2026-03-02 01:07:53.971086 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-02 01:07:53.971095 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-02 01:07:53.971102 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-02 01:07:53.971106 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-02 01:07:53.971120 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-02 01:07:53.971125 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-02 01:07:53.971133 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-02 01:07:53.971140 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-02 01:07:53.971144 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-02 01:07:53.971157 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-02 01:07:53.971162 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-02 01:07:53.971171 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-02 01:07:53.971181 | orchestrator | 2026-03-02 01:07:53.971189 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-02 01:07:53.971196 | orchestrator | Monday 02 March 2026 01:05:48 +0000 (0:00:03.892) 0:00:40.604 ********** 2026-03-02 01:07:53.971202 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-02 01:07:53.971208 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-02 01:07:53.971217 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-02 01:07:53.971223 | orchestrator | 2026-03-02 01:07:53.971228 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-02 01:07:53.971234 | orchestrator | Monday 02 March 2026 01:05:50 +0000 (0:00:01.697) 0:00:42.302 ********** 2026-03-02 01:07:53.971240 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-02 01:07:53.971252 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-02 01:07:53.971258 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-02 01:07:53.971263 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 
2026-03-02 01:07:53.971268 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-02 01:07:53.971274 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-02 01:07:53.971280 | orchestrator | 2026-03-02 01:07:53.971286 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-02 01:07:53.971291 | orchestrator | Monday 02 March 2026 01:05:53 +0000 (0:00:02.795) 0:00:45.097 ********** 2026-03-02 01:07:53.971297 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-02 01:07:53.971302 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-02 01:07:53.971307 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-02 01:07:53.971313 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-02 01:07:53.971319 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-02 01:07:53.971325 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-02 01:07:53.971331 | orchestrator | 2026-03-02 01:07:53.971336 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-02 01:07:53.971342 | orchestrator | Monday 02 March 2026 01:05:54 +0000 (0:00:01.161) 0:00:46.259 ********** 2026-03-02 01:07:53.971347 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:07:53.971353 | orchestrator | 2026-03-02 01:07:53.971386 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-02 01:07:53.971392 | orchestrator | Monday 02 March 2026 01:05:54 +0000 (0:00:00.106) 0:00:46.366 ********** 2026-03-02 01:07:53.971398 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:07:53.971403 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:07:53.971409 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:07:53.971414 | orchestrator | 2026-03-02 01:07:53.971420 | orchestrator | TASK 
[cinder : include_tasks] ************************************************** 2026-03-02 01:07:53.971431 | orchestrator | Monday 02 March 2026 01:05:54 +0000 (0:00:00.296) 0:00:46.663 ********** 2026-03-02 01:07:53.971437 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:07:53.971443 | orchestrator | 2026-03-02 01:07:53.971449 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-02 01:07:53.971477 | orchestrator | Monday 02 March 2026 01:05:55 +0000 (0:00:00.850) 0:00:47.514 ********** 2026-03-02 01:07:53.971484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-02 01:07:53.971492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-02 01:07:53.971502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-02 01:07:53.971510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}) 2026-03-02 01:07:53.971516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.971535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.971543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': 
'30'}}}) 2026-03-02 01:07:53.971550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.971561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.971566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.971570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.971581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.971585 | orchestrator | 2026-03-02 01:07:53.971589 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-02 01:07:53.971593 | orchestrator | Monday 02 March 2026 01:06:00 +0000 (0:00:04.411) 0:00:51.926 ********** 2026-03-02 
01:07:53.971597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-02 01:07:53.971604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.971611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.971620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.971634 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:07:53.971645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}})  2026-03-02 01:07:53.971653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.971659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-02 01:07:53.971668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.971676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.971699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.971711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.971718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.971725 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:07:53.971731 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:07:53.971738 | orchestrator | 2026-03-02 01:07:53.971745 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-02 01:07:53.971751 | orchestrator | Monday 02 March 2026 01:06:01 +0000 (0:00:01.214) 0:00:53.140 ********** 2026-03-02 01:07:53.971759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-02 01:07:53.971766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.971796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.971807 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.971814 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:07:53.971822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-02 01:07:53.971829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.971839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.971850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.971856 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:07:53.971864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-02 01:07:53.971875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.971882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.971889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.971895 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:07:53.971901 | orchestrator | 2026-03-02 01:07:53.971906 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-02 01:07:53.971913 | orchestrator | Monday 02 March 2026 01:06:02 +0000 (0:00:01.400) 0:00:54.540 ********** 2026-03-02 01:07:53.971925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-02 01:07:53.971931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-02 01:07:53.971939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-02 01:07:53.971943 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.971948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.971956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.971964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.971969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.971976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.971981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.971986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.971997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.972001 | orchestrator | 2026-03-02 01:07:53.972006 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-02 01:07:53.972011 | orchestrator | Monday 02 March 2026 01:06:06 +0000 (0:00:04.234) 0:00:58.775 ********** 2026-03-02 01:07:53.972016 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-02 01:07:53.972021 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-02 01:07:53.972024 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-02 01:07:53.972028 | orchestrator | 2026-03-02 01:07:53.972032 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-02 01:07:53.972036 | orchestrator | Monday 02 March 2026 01:06:08 +0000 (0:00:01.504) 0:01:00.279 ********** 2026-03-02 01:07:53.972042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-02 01:07:53.972047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-02 01:07:53.972051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}}) 2026-03-02 01:07:53.972060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.972064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.972068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.972074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 
'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.972079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.972082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.972092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.972096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.972100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.972104 | orchestrator | 2026-03-02 01:07:53.972108 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-02 01:07:53.972112 | orchestrator | Monday 02 March 2026 01:06:21 +0000 (0:00:13.465) 0:01:13.745 ********** 2026-03-02 01:07:53.972116 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:07:53.972120 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:07:53.972123 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:07:53.972127 | orchestrator | 2026-03-02 01:07:53.972131 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-02 01:07:53.972137 | orchestrator | Monday 02 March 2026 01:06:24 +0000 (0:00:02.381) 0:01:16.127 ********** 2026-03-02 01:07:53.972141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-02 01:07:53.972148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.972154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.972159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.972163 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:07:53.972167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-02 01:07:53.972175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.972179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.972185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.972189 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:07:53.972195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-02 01:07:53.972199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.972203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.972210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-02 01:07:53.972217 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:07:53.972221 | orchestrator | 2026-03-02 01:07:53.972225 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-02 01:07:53.972229 | orchestrator | Monday 02 March 2026 01:06:24 +0000 (0:00:00.640) 0:01:16.768 ********** 2026-03-02 01:07:53.972232 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:07:53.972236 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:07:53.972240 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:07:53.972244 | orchestrator | 2026-03-02 01:07:53.972248 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-02 01:07:53.972252 | orchestrator | Monday 02 March 2026 01:06:25 +0000 (0:00:00.276) 0:01:17.044 ********** 2026-03-02 01:07:53.972256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-02 01:07:53.972262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-02 01:07:53.972266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-02 
01:07:53.972273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.972282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.972286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.972292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.972296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.972300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.972306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.972313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.972317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-02 01:07:53.972321 | orchestrator | 2026-03-02 01:07:53.972325 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-02 01:07:53.972329 | orchestrator | Monday 02 March 2026 01:06:28 +0000 (0:00:02.968) 0:01:20.012 ********** 2026-03-02 01:07:53.972333 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:07:53.972336 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:07:53.972340 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:07:53.972344 | orchestrator | 2026-03-02 01:07:53.972348 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-02 01:07:53.972352 | orchestrator | Monday 02 March 2026 01:06:28 +0000 (0:00:00.373) 0:01:20.385 ********** 2026-03-02 01:07:53.972356 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:07:53.972375 | orchestrator | 2026-03-02 01:07:53.972383 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-02 01:07:53.972394 | orchestrator | Monday 02 March 2026 01:06:30 +0000 (0:00:01.895) 0:01:22.281 ********** 2026-03-02 01:07:53.972405 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:07:53.972411 | orchestrator | 2026-03-02 01:07:53.972418 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-02 01:07:53.972424 | orchestrator | Monday 02 March 2026 01:06:32 +0000 (0:00:01.997) 0:01:24.278 ********** 2026-03-02 01:07:53.972431 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:07:53.972437 | orchestrator | 2026-03-02 01:07:53.972443 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-03-02 01:07:53.972450 | orchestrator | Monday 02 March 2026 01:06:50 +0000 (0:00:18.211) 0:01:42.489 ********** 2026-03-02 01:07:53.972456 | orchestrator | 2026-03-02 01:07:53.972463 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-02 01:07:53.972468 | orchestrator | Monday 02 March 2026 01:06:50 +0000 (0:00:00.063) 0:01:42.553 ********** 2026-03-02 01:07:53.972472 | orchestrator | 2026-03-02 01:07:53.972476 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-02 01:07:53.972480 | orchestrator | Monday 02 March 2026 01:06:50 +0000 (0:00:00.059) 0:01:42.612 ********** 2026-03-02 01:07:53.972484 | orchestrator | 2026-03-02 01:07:53.972487 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-02 01:07:53.972491 | orchestrator | Monday 02 March 2026 01:06:50 +0000 (0:00:00.072) 0:01:42.685 ********** 2026-03-02 01:07:53.972495 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:07:53.972499 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:07:53.972506 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:07:53.972510 | orchestrator | 2026-03-02 01:07:53.972514 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-02 01:07:53.972518 | orchestrator | Monday 02 March 2026 01:07:08 +0000 (0:00:17.470) 0:02:00.155 ********** 2026-03-02 01:07:53.972521 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:07:53.972525 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:07:53.972529 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:07:53.972533 | orchestrator | 2026-03-02 01:07:53.972537 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-02 01:07:53.972541 | orchestrator | Monday 02 March 2026 01:07:17 +0000 (0:00:09.598) 
0:02:09.753 ********** 2026-03-02 01:07:53.972544 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:07:53.972548 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:07:53.972552 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:07:53.972556 | orchestrator | 2026-03-02 01:07:53.972559 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-02 01:07:53.972563 | orchestrator | Monday 02 March 2026 01:07:41 +0000 (0:00:23.099) 0:02:32.853 ********** 2026-03-02 01:07:53.972568 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:07:53.972574 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:07:53.972582 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:07:53.972591 | orchestrator | 2026-03-02 01:07:53.972597 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-02 01:07:53.972608 | orchestrator | Monday 02 March 2026 01:07:51 +0000 (0:00:10.251) 0:02:43.104 ********** 2026-03-02 01:07:53.972614 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:07:53.972620 | orchestrator | 2026-03-02 01:07:53.972626 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 01:07:53.972632 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-02 01:07:53.972639 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-02 01:07:53.972646 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-02 01:07:53.972652 | orchestrator | 2026-03-02 01:07:53.972659 | orchestrator | 2026-03-02 01:07:53.972665 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 01:07:53.972672 | orchestrator | Monday 02 March 2026 01:07:51 +0000 (0:00:00.257) 0:02:43.361 ********** 2026-03-02 
01:07:53.972676 | orchestrator | =============================================================================== 2026-03-02 01:07:53.972680 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 23.10s 2026-03-02 01:07:53.972684 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.21s 2026-03-02 01:07:53.972688 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 17.47s 2026-03-02 01:07:53.972692 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 13.47s 2026-03-02 01:07:53.972696 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.25s 2026-03-02 01:07:53.972699 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 9.60s 2026-03-02 01:07:53.972703 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.48s 2026-03-02 01:07:53.972707 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.15s 2026-03-02 01:07:53.972711 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.41s 2026-03-02 01:07:53.972715 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.36s 2026-03-02 01:07:53.972718 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.23s 2026-03-02 01:07:53.972722 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.89s 2026-03-02 01:07:53.972730 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.69s 2026-03-02 01:07:53.972734 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.45s 2026-03-02 01:07:53.972738 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.33s 2026-03-02 01:07:53.972745 
| orchestrator | cinder : Check cinder containers ---------------------------------------- 2.97s
2026-03-02 01:07:53.972749 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.80s
2026-03-02 01:07:53.972753 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.49s
2026-03-02 01:07:53.972756 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.38s
2026-03-02 01:07:53.972760 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.00s
2026-03-02 01:07:53.974754 | orchestrator | 2026-03-02 01:07:53 | INFO  | Task 9c12ba7e-1387-4521-8f9d-10470266b45e is in state STARTED
2026-03-02 01:07:53.974778 | orchestrator | 2026-03-02 01:07:53 | INFO  | Wait 1 second(s) until the next check
2026-03-02 01:07:57.030755 | orchestrator | 2026-03-02 01:07:57 | INFO  | Task c10c5ce3-5a91-4f72-8034-a26b162e1c1e is in state STARTED
2026-03-02 01:07:57.030807 | orchestrator | 2026-03-02 01:07:57 | INFO  | Task beb6fba3-4a1f-4a81-be47-fa0f340ac728 is in state STARTED
2026-03-02 01:07:57.030811 | orchestrator | 2026-03-02 01:07:57 | INFO  | Task 9c12ba7e-1387-4521-8f9d-10470266b45e is in state STARTED
2026-03-02 01:07:57.030815 | orchestrator | 2026-03-02 01:07:57 | INFO  | Wait 1 second(s) until the next check
[... the same three-task STARTED/Wait polling cycle repeats every ~3 seconds from 01:08:00 through 01:09:28 ...]
2026-03-02 01:09:31.476591 | orchestrator | 2026-03-02 01:09:31 | INFO  | Task c10c5ce3-5a91-4f72-8034-a26b162e1c1e is in state SUCCESS

2026-03-02 01:09:31.477871 | orchestrator | PLAY [Group hosts based on configuration] **************************************

2026-03-02 01:09:31.477883 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-02 01:09:31.477888 | orchestrator | Monday 02 March 2026 01:07:43 +0000 (0:00:00.205) 0:00:00.205 **********
2026-03-02 01:09:31.477894 | orchestrator | ok: [testbed-node-0]
2026-03-02 01:09:31.477899 | orchestrator | ok: [testbed-node-1]
2026-03-02 01:09:31.477904 | orchestrator | ok: [testbed-node-2]

2026-03-02 01:09:31.477914 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-02 01:09:31.477919 | orchestrator | Monday 02 March 2026 01:07:44 +0000 (0:00:00.262) 0:00:00.467 **********
2026-03-02 01:09:31.477924 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-03-02 01:09:31.477930 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-03-02 01:09:31.477935 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)

2026-03-02 01:09:31.477945 | orchestrator | PLAY [Apply role grafana] ******************************************************

2026-03-02 01:09:31.477955 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-03-02 01:09:31.477973 | orchestrator | Monday 02 March 2026 01:07:44 +0000 (0:00:00.367) 0:00:00.835 **********
2026-03-02 01:09:31.477978 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

2026-03-02 01:09:31.477989 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-03-02 01:09:31.477994 | orchestrator | Monday 02 March 2026 01:07:44 +0000 (0:00:00.468) 0:00:01.303 **********
2026-03-02 01:09:31.478000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-02 01:09:31.478007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', ...})
2026-03-02 01:09:31.478034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', ...})

2026-03-02 01:09:31.478045 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-03-02 01:09:31.478050 | orchestrator | Monday 02 March 2026 01:07:45 +0000 (0:00:00.724) 0:00:02.028 **********
2026-03-02 01:09:31.478055 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-03-02 01:09:31.478061 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-03-02 01:09:31.478116 | orchestrator | ok: [testbed-node-0 -> localhost]

2026-03-02 01:09:31.478128 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-03-02 01:09:31.478134 | orchestrator | Monday 02 March 2026 01:07:46 +0000 (0:00:00.780) 0:00:02.809 **********
2026-03-02 01:09:31.478149 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2

2026-03-02 01:09:31.478159 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-03-02 01:09:31.478164 | orchestrator | Monday 02 March 2026 01:07:46 +0000 (0:00:00.607) 0:00:03.416 **********
2026-03-02 01:09:31.478213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', ...})
2026-03-02 01:09:31.478225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', ...})
2026-03-02 01:09:31.478231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', ...})

2026-03-02 01:09:31.478241 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-03-02 01:09:31.478246 | orchestrator | Monday 02 March 2026 01:07:48 +0000 (0:00:01.328) 0:00:04.744 **********
2026-03-02 01:09:31.478251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', ...})
2026-03-02 01:09:31.478257 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:09:31.478262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', ...})
2026-03-02 01:09:31.478267 | orchestrator | skipping: [testbed-node-1]
2026-03-02 01:09:31.478276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', ...})
2026-03-02 01:09:31.478284 | orchestrator | skipping: [testbed-node-2]

2026-03-02 01:09:31.478319 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-03-02 01:09:31.478325 | orchestrator | Monday 02 March 2026 01:07:48 +0000 (0:00:00.307) 0:00:05.052 **********
2026-03-02 01:09:31.478330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', ...})
2026-03-02 01:09:31.478336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', ...})
2026-03-02 01:09:31.478341 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:09:31.478346 | orchestrator | skipping: [testbed-node-1]
2026-03-02 01:09:31.478352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', ...})
2026-03-02 01:09:31.478357 | orchestrator | skipping: [testbed-node-2]

2026-03-02 01:09:31.478367 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-03-02 01:09:31.478372 | orchestrator | Monday 02 March 2026 01:07:49 +0000 (0:00:00.641) 0:00:05.694 **********
2026-03-02 01:09:31.478377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', ...})
2026-03-02 01:09:31.478499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', ...})
2026-03-02 01:09:31.478514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', ...})

2026-03-02 01:09:31.478524 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-03-02 01:09:31.478529 | orchestrator | Monday 02 March 2026 01:07:50 +0000 (0:00:01.131) 0:00:06.826 **********
2026-03-02 01:09:31.478534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', ...})
2026-03-02 01:09:31.478540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', ...})
2026-03-02 01:09:31.478545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', ...})

2026-03-02 01:09:31.478591 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-03-02 01:09:31.478597 | orchestrator | Monday 02 March 2026 01:07:51 +0000 (0:00:01.217) 0:00:08.044 **********
2026-03-02 01:09:31.478602 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:09:31.478607 | orchestrator | skipping: [testbed-node-1]
2026-03-02 01:09:31.478612 | orchestrator | skipping: [testbed-node-2]

2026-03-02 01:09:31.478622 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-03-02 01:09:31.478627 | orchestrator | Monday 02 March 2026 01:07:52 +0000 (0:00:00.478) 0:00:08.523 **********
2026-03-02 01:09:31.478638 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-02 01:09:31.478643 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-02 01:09:31.478648 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)

2026-03-02 01:09:31.478658 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-03-02 01:09:31.478663 | orchestrator | Monday 02 March 2026 01:07:53 +0000 (0:00:01.195) 0:00:09.718 **********
2026-03-02 01:09:31.478669 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-02 01:09:31.478674 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-02 01:09:31.478679 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)

2026-03-02 01:09:31.478689 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-03-02 01:09:31.478694 | orchestrator | Monday 02 March 2026 01:07:54 +0000 (0:00:01.180) 0:00:10.898 **********
2026-03-02 01:09:31.478703 | orchestrator | ok: [testbed-node-0 -> localhost]

2026-03-02 01:09:31.478713 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-03-02 01:09:31.478718 | orchestrator | Monday 02 March 2026 01:07:55 +0000 (0:00:00.805) 0:00:11.703 **********
2026-03-02 01:09:31.478723 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-03-02 01:09:31.478751 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-03-02 01:09:31.478756 | orchestrator | ok: [testbed-node-0]
2026-03-02 01:09:31.478949 | orchestrator | ok: [testbed-node-1]
2026-03-02 01:09:31.478961 | orchestrator | ok: [testbed-node-2]

2026-03-02 01:09:31.478968 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-03-02 01:09:31.478971 | orchestrator | Monday 02 March 2026 01:07:55 +0000 (0:00:00.685) 0:00:12.388 **********
2026-03-02 01:09:31.478975 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:09:31.478978 | orchestrator | skipping: [testbed-node-1]
2026-03-02 01:09:31.478981 | orchestrator | skipping: [testbed-node-2]

2026-03-02 01:09:31.478987 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-03-02 01:09:31.478990 | orchestrator | Monday 02 March 2026 01:07:56 +0000 (0:00:00.506) 0:00:12.895 **********
2026-03-02 01:09:31.478994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1112956, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7209764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-02 01:09:31.478999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', ...
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1112956, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7209764, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1112973, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.728592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1112973, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.728592, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1112973, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.728592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1113001, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7420182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1113001, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7420182, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1113001, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7420182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1112967, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7259429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1112967, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7259429, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1112967, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7259429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1113005, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.743592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1113005, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 
'ctime': 1772410690.743592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1113005, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.743592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1112960, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.72273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1112960, 
'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.72273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1112960, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.72273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1112985, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7345078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1112985, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7345078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1112985, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7345078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1112994, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7398987, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1112994, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7398987, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1112994, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7398987, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1112951, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7205083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1112951, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7205083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1112951, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7205083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1112959, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7220504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1112959, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7220504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1112959, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7220504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1112972, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7269173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1112972, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7269173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1112972, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7269173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1112990, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.735592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1112990, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.735592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1112990, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.735592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1112999, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7412288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1112999, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7412288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1112999, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7412288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1112964, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7257607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1112964, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7257607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1112964, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7257607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1112992, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.738592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479308 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1112992, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.738592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1112992, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.738592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1113008, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7445922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479326 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1113008, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7445922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1113008, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7445922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1112988, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.734592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479349 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1112988, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.734592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1112988, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.734592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1112982, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7325919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-03-02 01:09:31.479373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1112982, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7325919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1112982, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7325919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1112980, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7305918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1112980, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7305918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1112980, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7305918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1112991, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.737592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1112991, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.737592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1112991, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.737592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1112979, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7295918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1112979, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7295918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1112979, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7295918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1112997, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7409384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1112997, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7409384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1112997, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7409384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1112961, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 
1772410690.7246656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1112961, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7246656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1112961, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7246656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1113150, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 
1772409747.0, 'ctime': 1772410690.777424, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1113150, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.777424, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1113150, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.777424, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1113052, 'dev': 114, 'nlink': 1, 
'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7599018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1113052, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7599018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1113052, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7599018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 30898, 'inode': 1113026, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7491086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1113026, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7491086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1113026, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7491086, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1113092, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7635195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1113092, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7635195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1113092, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7635195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1113013, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7463923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1113013, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7463923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1113013, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7463923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 
01:09:31.479530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1113116, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7711105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1113116, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7711105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1113116, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7711105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1113094, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7684157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1113094, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7684157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1113094, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7684157, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1113121, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7711105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1113121, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7711105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 
'inode': 1113121, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7711105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1113144, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.776348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1113144, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.776348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1113144, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.776348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1113115, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.769909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1113115, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.769909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1113115, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.769909, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1113081, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.760743, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1113081, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.760743, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1113081, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.760743, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1113039, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7538564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1113039, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7538564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1113039, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7538564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1113077, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7604356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1113077, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7604356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479632 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1113077, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7604356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1113031, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7506409, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1113031, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7506409, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 
01:09:31.479657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1113031, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7506409, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1113088, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7630427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1113088, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7630427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1113088, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7630427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1113133, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.775635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1113133, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.775635, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1113133, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.775635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1113127, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.772763, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 
1113127, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.772763, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1113127, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.772763, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1113016, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7472756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1113016, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7472756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1113016, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7472756, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1113022, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7479422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1113022, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7479422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1113022, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7479422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1113109, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7698078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 
'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1113109, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7698078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1113109, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7698078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1113124, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7718265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-03-02 01:09:31.479862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1113124, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7718265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1113124, 'dev': 114, 'nlink': 1, 'atime': 1772409747.0, 'mtime': 1772409747.0, 'ctime': 1772410690.7718265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-02 01:09:31.479873 | orchestrator | 2026-03-02 01:09:31.479878 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-02 01:09:31.479884 | orchestrator | Monday 02 March 2026 01:08:30 +0000 (0:00:34.132) 0:00:47.027 ********** 2026-03-02 01:09:31.479891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-02 01:09:31.479896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-02 01:09:31.479901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-02 01:09:31.479910 | orchestrator | 2026-03-02 01:09:31.479915 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-02 01:09:31.479920 | orchestrator | Monday 02 
March 2026 01:08:31 +0000 (0:00:00.868) 0:00:47.896 ********** 2026-03-02 01:09:31.479925 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:09:31.479930 | orchestrator | 2026-03-02 01:09:31.479935 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-02 01:09:31.479940 | orchestrator | Monday 02 March 2026 01:08:33 +0000 (0:00:01.894) 0:00:49.791 ********** 2026-03-02 01:09:31.479944 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:09:31.479949 | orchestrator | 2026-03-02 01:09:31.479954 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-02 01:09:31.479958 | orchestrator | Monday 02 March 2026 01:08:35 +0000 (0:00:02.061) 0:00:51.853 ********** 2026-03-02 01:09:31.479963 | orchestrator | 2026-03-02 01:09:31.479968 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-02 01:09:31.479972 | orchestrator | Monday 02 March 2026 01:08:35 +0000 (0:00:00.075) 0:00:51.928 ********** 2026-03-02 01:09:31.479977 | orchestrator | 2026-03-02 01:09:31.479981 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-02 01:09:31.479986 | orchestrator | Monday 02 March 2026 01:08:35 +0000 (0:00:00.255) 0:00:52.184 ********** 2026-03-02 01:09:31.479991 | orchestrator | 2026-03-02 01:09:31.479996 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-03-02 01:09:31.480000 | orchestrator | Monday 02 March 2026 01:08:35 +0000 (0:00:00.068) 0:00:52.252 ********** 2026-03-02 01:09:31.480005 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:09:31.480009 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:09:31.480017 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:09:31.480022 | orchestrator | 2026-03-02 01:09:31.480027 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first 
node] *********
2026-03-02 01:09:31.480032 | orchestrator | Monday 02 March 2026 01:08:37 +0000 (0:00:01.643) 0:00:53.896 **********
2026-03-02 01:09:31.480036 | orchestrator | skipping: [testbed-node-1]
2026-03-02 01:09:31.480041 | orchestrator | skipping: [testbed-node-2]
2026-03-02 01:09:31.480046 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-03-02 01:09:31.480051 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-03-02 01:09:31.480056 | orchestrator | ok: [testbed-node-0]
2026-03-02 01:09:31.480060 | orchestrator |
2026-03-02 01:09:31.480065 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-03-02 01:09:31.480070 | orchestrator | Monday 02 March 2026 01:09:03 +0000 (0:00:26.112) 0:01:20.009 **********
2026-03-02 01:09:31.480074 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:09:31.480079 | orchestrator | changed: [testbed-node-1]
2026-03-02 01:09:31.480084 | orchestrator | changed: [testbed-node-2]
2026-03-02 01:09:31.480088 | orchestrator |
2026-03-02 01:09:31.480094 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-03-02 01:09:31.480098 | orchestrator | Monday 02 March 2026 01:09:23 +0000 (0:00:19.897) 0:01:39.907 **********
2026-03-02 01:09:31.480103 | orchestrator | ok: [testbed-node-0]
2026-03-02 01:09:31.480108 | orchestrator |
2026-03-02 01:09:31.480113 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-03-02 01:09:31.480122 | orchestrator | Monday 02 March 2026 01:09:25 +0000 (0:00:02.140) 0:01:42.047 **********
2026-03-02 01:09:31.480126 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:09:31.480131 | orchestrator | skipping: [testbed-node-1]
2026-03-02 01:09:31.480136 | orchestrator | skipping: [testbed-node-2]
2026-03-02 01:09:31.480154 | orchestrator |
2026-03-02 01:09:31.480159 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-03-02 01:09:31.480165 | orchestrator | Monday 02 March 2026 01:09:26 +0000 (0:00:00.498) 0:01:42.545 **********
2026-03-02 01:09:31.480173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-03-02 01:09:31.480179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-03-02 01:09:31.480184 | orchestrator |
2026-03-02 01:09:31.480189 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-03-02 01:09:31.480194 | orchestrator | Monday 02 March 2026 01:09:28 +0000 (0:00:02.180) 0:01:44.726 **********
2026-03-02 01:09:31.480198 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:09:31.480203 | orchestrator |
2026-03-02 01:09:31.480208 | orchestrator | PLAY RECAP *********************************************************************
2026-03-02 01:09:31.480213 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-02 01:09:31.480219 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-02 01:09:31.480224 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-02 01:09:31.480229 | orchestrator |
2026-03-02 01:09:31.480233 | orchestrator |
2026-03-02 01:09:31.480238 | orchestrator | TASKS RECAP ********************************************************************
2026-03-02 01:09:31.480243 | orchestrator | Monday 02 March 2026 01:09:28 +0000 (0:00:00.266) 0:01:44.992 **********
2026-03-02 01:09:31.480248 | orchestrator | ===============================================================================
2026-03-02 01:09:31.480253 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 34.13s
2026-03-02 01:09:31.480258 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.11s
2026-03-02 01:09:31.480262 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 19.90s
2026-03-02 01:09:31.480267 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.18s
2026-03-02 01:09:31.480272 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.14s
2026-03-02 01:09:31.480277 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.06s
2026-03-02 01:09:31.480281 | orchestrator | grafana : Creating grafana database ------------------------------------- 1.89s
2026-03-02 01:09:31.480286 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.64s
2026-03-02 01:09:31.480290 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.33s
2026-03-02 01:09:31.480295 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.22s
2026-03-02 01:09:31.480300 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.20s
2026-03-02 01:09:31.480305 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.18s
2026-03-02 01:09:31.480312 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.13s
2026-03-02 01:09:31.480321 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.87s
2026-03-02 01:09:31.480325 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.80s
2026-03-02 01:09:31.480330 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.78s
2026-03-02 01:09:31.480335 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.72s
2026-03-02 01:09:31.480339 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.69s
2026-03-02 01:09:31.480344 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.64s
2026-03-02 01:09:31.480349 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.61s
2026-03-02 01:09:31.480353 | orchestrator | 2026-03-02 01:09:31 | INFO  | Task beb6fba3-4a1f-4a81-be47-fa0f340ac728 is in state STARTED
2026-03-02 01:09:31.482087 | orchestrator | 2026-03-02 01:09:31 | INFO  | Task 9c12ba7e-1387-4521-8f9d-10470266b45e is in state STARTED
2026-03-02 01:09:31.482187 | orchestrator | 2026-03-02 01:09:31 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeat every ~3 seconds; tasks beb6fba3-4a1f-4a81-be47-fa0f340ac728 and 9c12ba7e-1387-4521-8f9d-10470266b45e remain in state STARTED from 01:09:34 through 01:10:29 ...]
2026-03-02 01:10:32.334813 | orchestrator | 2026-03-02 01:10:32 | INFO  | Task beb6fba3-4a1f-4a81-be47-fa0f340ac728 is in state STARTED
2026-03-02 01:10:32.335279 | orchestrator | 2026-03-02 01:10:32 | INFO  | Task 9c12ba7e-1387-4521-8f9d-10470266b45e is in state SUCCESS
2026-03-02 01:10:32.336880 | orchestrator | 2026-03-02 01:10:32 | INFO  | Task 217cf079-c201-4f32-880c-aef31ffe4d15 is in state STARTED
2026-03-02 01:10:32.336942 | orchestrator | 2026-03-02 01:10:32 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeat every ~3 seconds; tasks beb6fba3-4a1f-4a81-be47-fa0f340ac728 and 217cf079-c201-4f32-880c-aef31ffe4d15 remain in state STARTED from 01:10:35 through 01:14:02 ...]
2026-03-02 01:14:05.265592 | orchestrator | 2026-03-02 01:14:05 | INFO  | Task beb6fba3-4a1f-4a81-be47-fa0f340ac728 is in state STARTED
2026-03-02 01:14:05.268530 | orchestrator | 2026-03-02 01:14:05 | INFO  | Task 217cf079-c201-4f32-880c-aef31ffe4d15 is in state STARTED
2026-03-02 01:14:05.268602 | orchestrator | 2026-03-02 01:14:05 | INFO  | Wait 1 second(s)
until the next check 2026-03-02 01:14:08.311183 | orchestrator | 2026-03-02 01:14:08 | INFO  | Task beb6fba3-4a1f-4a81-be47-fa0f340ac728 is in state STARTED 2026-03-02 01:14:08.313680 | orchestrator | 2026-03-02 01:14:08 | INFO  | Task 217cf079-c201-4f32-880c-aef31ffe4d15 is in state STARTED 2026-03-02 01:14:08.314120 | orchestrator | 2026-03-02 01:14:08 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:14:11.367309 | orchestrator | 2026-03-02 01:14:11 | INFO  | Task beb6fba3-4a1f-4a81-be47-fa0f340ac728 is in state STARTED 2026-03-02 01:14:11.368833 | orchestrator | 2026-03-02 01:14:11 | INFO  | Task 217cf079-c201-4f32-880c-aef31ffe4d15 is in state STARTED 2026-03-02 01:14:11.368867 | orchestrator | 2026-03-02 01:14:11 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:14:14.411898 | orchestrator | 2026-03-02 01:14:14 | INFO  | Task beb6fba3-4a1f-4a81-be47-fa0f340ac728 is in state STARTED 2026-03-02 01:14:14.412157 | orchestrator | 2026-03-02 01:14:14 | INFO  | Task 217cf079-c201-4f32-880c-aef31ffe4d15 is in state STARTED 2026-03-02 01:14:14.412171 | orchestrator | 2026-03-02 01:14:14 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:14:17.456416 | orchestrator | 2026-03-02 01:14:17 | INFO  | Task beb6fba3-4a1f-4a81-be47-fa0f340ac728 is in state STARTED 2026-03-02 01:14:17.461089 | orchestrator | 2026-03-02 01:14:17 | INFO  | Task 217cf079-c201-4f32-880c-aef31ffe4d15 is in state STARTED 2026-03-02 01:14:17.461190 | orchestrator | 2026-03-02 01:14:17 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:14:20.498712 | orchestrator | 2026-03-02 01:14:20 | INFO  | Task beb6fba3-4a1f-4a81-be47-fa0f340ac728 is in state STARTED 2026-03-02 01:14:20.502192 | orchestrator | 2026-03-02 01:14:20 | INFO  | Task 217cf079-c201-4f32-880c-aef31ffe4d15 is in state STARTED 2026-03-02 01:14:20.502280 | orchestrator | 2026-03-02 01:14:20 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:14:23.543914 | orchestrator | 2026-03-02 
01:14:23 | INFO  | Task beb6fba3-4a1f-4a81-be47-fa0f340ac728 is in state STARTED 2026-03-02 01:14:23.545222 | orchestrator | 2026-03-02 01:14:23 | INFO  | Task 217cf079-c201-4f32-880c-aef31ffe4d15 is in state STARTED 2026-03-02 01:14:23.545258 | orchestrator | 2026-03-02 01:14:23 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:14:26.590192 | orchestrator | 2026-03-02 01:14:26.590242 | orchestrator | 2026-03-02 01:14:26.590247 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-02 01:14:26.590252 | orchestrator | 2026-03-02 01:14:26.590256 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-02 01:14:26.590260 | orchestrator | Monday 02 March 2026 01:06:57 +0000 (0:00:00.183) 0:00:00.183 ********** 2026-03-02 01:14:26.590264 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:14:26.590268 | orchestrator | ok: [testbed-node-1] 2026-03-02 01:14:26.590272 | orchestrator | ok: [testbed-node-2] 2026-03-02 01:14:26.590276 | orchestrator | 2026-03-02 01:14:26.590280 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-02 01:14:26.590284 | orchestrator | Monday 02 March 2026 01:06:58 +0000 (0:00:00.264) 0:00:00.447 ********** 2026-03-02 01:14:26.590288 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2026-03-02 01:14:26.590292 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2026-03-02 01:14:26.590295 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-03-02 01:14:26.590299 | orchestrator | 2026-03-02 01:14:26.590303 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2026-03-02 01:14:26.590307 | orchestrator | 2026-03-02 01:14:26.590311 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-03-02 01:14:26.590315 | orchestrator | Monday 02 March 2026 01:06:58 
+0000 (0:00:00.574) 0:00:01.022 ********** 2026-03-02 01:14:26.590318 | orchestrator | 2026-03-02 01:14:26.590329 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2026-03-02 01:14:26.590333 | orchestrator | 2026-03-02 01:14:26.590337 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2026-03-02 01:14:26.590341 | orchestrator | 2026-03-02 01:14:26.590345 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2026-03-02 01:14:26.590348 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:14:26.590352 | orchestrator | ok: [testbed-node-2] 2026-03-02 01:14:26.590356 | orchestrator | ok: [testbed-node-1] 2026-03-02 01:14:26.590359 | orchestrator | 2026-03-02 01:14:26.590363 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 01:14:26.590368 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 01:14:26.590372 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 01:14:26.590376 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 01:14:26.590380 | orchestrator | 2026-03-02 01:14:26.590384 | orchestrator | 2026-03-02 01:14:26.590388 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 01:14:26.590391 | orchestrator | Monday 02 March 2026 01:10:29 +0000 (0:03:30.705) 0:03:31.727 ********** 2026-03-02 01:14:26.590395 | orchestrator | =============================================================================== 2026-03-02 01:14:26.590409 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 210.71s 2026-03-02 01:14:26.590413 | orchestrator | Group hosts based on enabled services ----------------------------------- 
0.57s 2026-03-02 01:14:26.590417 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2026-03-02 01:14:26.590420 | orchestrator | 2026-03-02 01:14:26.590424 | orchestrator | 2026-03-02 01:14:26.590428 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-02 01:14:26.590431 | orchestrator | 2026-03-02 01:14:26.590435 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-03-02 01:14:26.590439 | orchestrator | Monday 02 March 2026 01:06:44 +0000 (0:00:00.205) 0:00:00.205 ********** 2026-03-02 01:14:26.590486 | orchestrator | changed: [testbed-manager] 2026-03-02 01:14:26.590493 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:14:26.590497 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:14:26.590500 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:14:26.590504 | orchestrator | changed: [testbed-node-3] 2026-03-02 01:14:26.590508 | orchestrator | changed: [testbed-node-4] 2026-03-02 01:14:26.590512 | orchestrator | changed: [testbed-node-5] 2026-03-02 01:14:26.590516 | orchestrator | 2026-03-02 01:14:26.590520 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-02 01:14:26.590524 | orchestrator | Monday 02 March 2026 01:06:44 +0000 (0:00:00.634) 0:00:00.840 ********** 2026-03-02 01:14:26.590528 | orchestrator | changed: [testbed-manager] 2026-03-02 01:14:26.590609 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:14:26.590621 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:14:26.590629 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:14:26.590635 | orchestrator | changed: [testbed-node-3] 2026-03-02 01:14:26.590641 | orchestrator | changed: [testbed-node-4] 2026-03-02 01:14:26.590647 | orchestrator | changed: [testbed-node-5] 2026-03-02 01:14:26.590654 | orchestrator | 2026-03-02 01:14:26.590660 | orchestrator | TASK [Group 
hosts based on enabled services] *********************************** 2026-03-02 01:14:26.590667 | orchestrator | Monday 02 March 2026 01:06:45 +0000 (0:00:00.603) 0:00:01.443 ********** 2026-03-02 01:14:26.590674 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-03-02 01:14:26.590681 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-03-02 01:14:26.590734 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-03-02 01:14:26.590740 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-03-02 01:14:26.590744 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-03-02 01:14:26.590757 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-03-02 01:14:26.590773 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-03-02 01:14:26.590777 | orchestrator | 2026-03-02 01:14:26.590781 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-03-02 01:14:26.590786 | orchestrator | 2026-03-02 01:14:26.590804 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-02 01:14:26.590827 | orchestrator | Monday 02 March 2026 01:06:46 +0000 (0:00:00.728) 0:00:02.172 ********** 2026-03-02 01:14:26.590844 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:14:26.590849 | orchestrator | 2026-03-02 01:14:26.590854 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-03-02 01:14:26.590860 | orchestrator | Monday 02 March 2026 01:06:46 +0000 (0:00:00.609) 0:00:02.781 ********** 2026-03-02 01:14:26.590866 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-03-02 01:14:26.590872 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-03-02 01:14:26.590878 | orchestrator | 2026-03-02 01:14:26.590885 | orchestrator | TASK [nova : 
Creating Nova databases user and setting permissions] ************* 2026-03-02 01:14:26.590891 | orchestrator | Monday 02 March 2026 01:06:51 +0000 (0:00:04.824) 0:00:07.606 ********** 2026-03-02 01:14:26.590896 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-02 01:14:26.590908 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-02 01:14:26.590915 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:14:26.590922 | orchestrator | 2026-03-02 01:14:26.590927 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-02 01:14:26.590931 | orchestrator | Monday 02 March 2026 01:06:56 +0000 (0:00:05.141) 0:00:12.748 ********** 2026-03-02 01:14:26.590935 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:14:26.590939 | orchestrator | 2026-03-02 01:14:26.590943 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-03-02 01:14:26.590949 | orchestrator | Monday 02 March 2026 01:06:57 +0000 (0:00:00.610) 0:00:13.358 ********** 2026-03-02 01:14:26.590964 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:14:26.590971 | orchestrator | 2026-03-02 01:14:26.590978 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-03-02 01:14:26.590984 | orchestrator | Monday 02 March 2026 01:06:58 +0000 (0:00:01.277) 0:00:14.635 ********** 2026-03-02 01:14:26.590990 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:14:26.590996 | orchestrator | 2026-03-02 01:14:26.591002 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-02 01:14:26.591008 | orchestrator | Monday 02 March 2026 01:07:01 +0000 (0:00:02.431) 0:00:17.067 ********** 2026-03-02 01:14:26.591015 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.591021 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.591027 | orchestrator | skipping: [testbed-node-2] 
2026-03-02 01:14:26.591032 | orchestrator | 2026-03-02 01:14:26.591036 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-02 01:14:26.591039 | orchestrator | Monday 02 March 2026 01:07:01 +0000 (0:00:00.243) 0:00:17.311 ********** 2026-03-02 01:14:26.591043 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:14:26.591047 | orchestrator | 2026-03-02 01:14:26.591051 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-03-02 01:14:26.591054 | orchestrator | Monday 02 March 2026 01:07:29 +0000 (0:00:28.219) 0:00:45.530 ********** 2026-03-02 01:14:26.591058 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:14:26.591062 | orchestrator | 2026-03-02 01:14:26.591065 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-02 01:14:26.591069 | orchestrator | Monday 02 March 2026 01:07:44 +0000 (0:00:15.045) 0:01:00.575 ********** 2026-03-02 01:14:26.591073 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:14:26.591077 | orchestrator | 2026-03-02 01:14:26.591080 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-02 01:14:26.591084 | orchestrator | Monday 02 March 2026 01:07:57 +0000 (0:00:13.124) 0:01:13.700 ********** 2026-03-02 01:14:26.591088 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:14:26.591091 | orchestrator | 2026-03-02 01:14:26.591095 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-03-02 01:14:26.591099 | orchestrator | Monday 02 March 2026 01:07:58 +0000 (0:00:01.149) 0:01:14.850 ********** 2026-03-02 01:14:26.591102 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.591106 | orchestrator | 2026-03-02 01:14:26.591110 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-02 01:14:26.591113 | orchestrator | Monday 02 March 2026 
01:07:59 +0000 (0:00:00.502) 0:01:15.352 ********** 2026-03-02 01:14:26.591117 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:14:26.591121 | orchestrator | 2026-03-02 01:14:26.591125 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-02 01:14:26.591129 | orchestrator | Monday 02 March 2026 01:07:59 +0000 (0:00:00.483) 0:01:15.835 ********** 2026-03-02 01:14:26.591133 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:14:26.591136 | orchestrator | 2026-03-02 01:14:26.591140 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-02 01:14:26.591144 | orchestrator | Monday 02 March 2026 01:08:17 +0000 (0:00:17.065) 0:01:32.901 ********** 2026-03-02 01:14:26.591151 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.591155 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.591159 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.591163 | orchestrator | 2026-03-02 01:14:26.591166 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-03-02 01:14:26.591170 | orchestrator | 2026-03-02 01:14:26.591174 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-02 01:14:26.591177 | orchestrator | Monday 02 March 2026 01:08:17 +0000 (0:00:00.271) 0:01:33.172 ********** 2026-03-02 01:14:26.591181 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:14:26.591185 | orchestrator | 2026-03-02 01:14:26.591189 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-03-02 01:14:26.591192 | orchestrator | Monday 02 March 2026 01:08:17 +0000 (0:00:00.509) 0:01:33.682 ********** 2026-03-02 01:14:26.591196 | orchestrator | skipping: [testbed-node-1] 2026-03-02 
01:14:26.591199 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.591203 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:14:26.591207 | orchestrator | 2026-03-02 01:14:26.591211 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-03-02 01:14:26.591214 | orchestrator | Monday 02 March 2026 01:08:19 +0000 (0:00:01.981) 0:01:35.663 ********** 2026-03-02 01:14:26.591218 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.591222 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.591230 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:14:26.591234 | orchestrator | 2026-03-02 01:14:26.591238 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-02 01:14:26.591242 | orchestrator | Monday 02 March 2026 01:08:21 +0000 (0:00:02.009) 0:01:37.672 ********** 2026-03-02 01:14:26.591245 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.591249 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.591253 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.591256 | orchestrator | 2026-03-02 01:14:26.591260 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-02 01:14:26.591264 | orchestrator | Monday 02 March 2026 01:08:22 +0000 (0:00:00.290) 0:01:37.963 ********** 2026-03-02 01:14:26.591267 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-02 01:14:26.591271 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.591275 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-02 01:14:26.591279 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.591282 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-02 01:14:26.591286 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-03-02 01:14:26.591290 | orchestrator | 2026-03-02 01:14:26.591293 | orchestrator 
| TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-02 01:14:26.591308 | orchestrator | Monday 02 March 2026 01:08:29 +0000 (0:00:06.965) 0:01:44.929 ********** 2026-03-02 01:14:26.591312 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.591318 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.591322 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.591326 | orchestrator | 2026-03-02 01:14:26.591330 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-02 01:14:26.591355 | orchestrator | Monday 02 March 2026 01:08:29 +0000 (0:00:00.309) 0:01:45.238 ********** 2026-03-02 01:14:26.591364 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-02 01:14:26.591370 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.591376 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-02 01:14:26.591382 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.591388 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-02 01:14:26.591393 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.591399 | orchestrator | 2026-03-02 01:14:26.591405 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-02 01:14:26.591412 | orchestrator | Monday 02 March 2026 01:08:29 +0000 (0:00:00.561) 0:01:45.800 ********** 2026-03-02 01:14:26.591422 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.591429 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.591436 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:14:26.591464 | orchestrator | 2026-03-02 01:14:26.591471 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-03-02 01:14:26.591475 | orchestrator | Monday 02 March 2026 01:08:30 +0000 (0:00:00.581) 0:01:46.381 ********** 2026-03-02 01:14:26.591478 | orchestrator 
| skipping: [testbed-node-1] 2026-03-02 01:14:26.591482 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.591486 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:14:26.591490 | orchestrator | 2026-03-02 01:14:26.591494 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-03-02 01:14:26.591497 | orchestrator | Monday 02 March 2026 01:08:31 +0000 (0:00:00.841) 0:01:47.223 ********** 2026-03-02 01:14:26.591501 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.591505 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.591508 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:14:26.591512 | orchestrator | 2026-03-02 01:14:26.591516 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-03-02 01:14:26.591536 | orchestrator | Monday 02 March 2026 01:08:33 +0000 (0:00:01.795) 0:01:49.018 ********** 2026-03-02 01:14:26.591540 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.591544 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.591554 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:14:26.591557 | orchestrator | 2026-03-02 01:14:26.591561 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-02 01:14:26.591565 | orchestrator | Monday 02 March 2026 01:08:54 +0000 (0:00:20.869) 0:02:09.888 ********** 2026-03-02 01:14:26.591569 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.591572 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.591576 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:14:26.591580 | orchestrator | 2026-03-02 01:14:26.591583 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-02 01:14:26.591587 | orchestrator | Monday 02 March 2026 01:09:06 +0000 (0:00:12.027) 0:02:21.915 ********** 2026-03-02 01:14:26.591591 | orchestrator | ok: 
[testbed-node-0] 2026-03-02 01:14:26.591595 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.591598 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.591602 | orchestrator | 2026-03-02 01:14:26.591606 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-03-02 01:14:26.591609 | orchestrator | Monday 02 March 2026 01:09:07 +0000 (0:00:01.479) 0:02:23.395 ********** 2026-03-02 01:14:26.591613 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.591617 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.591620 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:14:26.591624 | orchestrator | 2026-03-02 01:14:26.591628 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-03-02 01:14:26.591631 | orchestrator | Monday 02 March 2026 01:09:19 +0000 (0:00:11.741) 0:02:35.136 ********** 2026-03-02 01:14:26.591635 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.591639 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.591642 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.591646 | orchestrator | 2026-03-02 01:14:26.591650 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-02 01:14:26.591653 | orchestrator | Monday 02 March 2026 01:09:20 +0000 (0:00:01.086) 0:02:36.223 ********** 2026-03-02 01:14:26.591657 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.591661 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.591664 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.591668 | orchestrator | 2026-03-02 01:14:26.591672 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-03-02 01:14:26.591676 | orchestrator | 2026-03-02 01:14:26.591683 | orchestrator | TASK [nova : include_tasks] **************************************************** 
2026-03-02 01:14:26.591691 | orchestrator | Monday 02 March 2026 01:09:20 +0000 (0:00:00.506) 0:02:36.730 ********** 2026-03-02 01:14:26.591695 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:14:26.591699 | orchestrator | 2026-03-02 01:14:26.591703 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-03-02 01:14:26.591707 | orchestrator | Monday 02 March 2026 01:09:21 +0000 (0:00:00.523) 0:02:37.254 ********** 2026-03-02 01:14:26.591711 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-03-02 01:14:26.591714 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-03-02 01:14:26.591718 | orchestrator | 2026-03-02 01:14:26.591722 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-03-02 01:14:26.591725 | orchestrator | Monday 02 March 2026 01:09:24 +0000 (0:00:02.949) 0:02:40.204 ********** 2026-03-02 01:14:26.591729 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-03-02 01:14:26.591734 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-03-02 01:14:26.591740 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-03-02 01:14:26.591745 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-03-02 01:14:26.591748 | orchestrator | 2026-03-02 01:14:26.591752 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-03-02 01:14:26.591756 | orchestrator | Monday 02 March 2026 01:09:30 +0000 (0:00:06.045) 0:02:46.250 ********** 2026-03-02 01:14:26.591760 | orchestrator | ok: [testbed-node-0] => 
(item=service) 2026-03-02 01:14:26.591763 | orchestrator | 2026-03-02 01:14:26.591767 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-03-02 01:14:26.591771 | orchestrator | Monday 02 March 2026 01:09:33 +0000 (0:00:02.669) 0:02:48.919 ********** 2026-03-02 01:14:26.591775 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-03-02 01:14:26.591779 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-02 01:14:26.591782 | orchestrator | 2026-03-02 01:14:26.591786 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-03-02 01:14:26.591790 | orchestrator | Monday 02 March 2026 01:09:36 +0000 (0:00:03.501) 0:02:52.420 ********** 2026-03-02 01:14:26.591793 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-02 01:14:26.591797 | orchestrator | 2026-03-02 01:14:26.591801 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-03-02 01:14:26.591805 | orchestrator | Monday 02 March 2026 01:09:39 +0000 (0:00:03.255) 0:02:55.676 ********** 2026-03-02 01:14:26.591808 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-03-02 01:14:26.591812 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-03-02 01:14:26.591816 | orchestrator | 2026-03-02 01:14:26.591819 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-02 01:14:26.591823 | orchestrator | Monday 02 March 2026 01:09:46 +0000 (0:00:06.931) 0:03:02.607 ********** 2026-03-02 01:14:26.591832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-02 01:14:26.591855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-02 01:14:26.591867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-02 01:14:26.591875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.591881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.591891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.591898 | orchestrator | 2026-03-02 01:14:26.591905 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-02 01:14:26.591911 | orchestrator | Monday 02 March 2026 01:09:48 +0000 (0:00:01.440) 0:03:04.048 ********** 2026-03-02 01:14:26.591917 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.591924 | orchestrator | 2026-03-02 01:14:26.591928 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-02 01:14:26.591931 | orchestrator | Monday 02 March 2026 01:09:48 +0000 (0:00:00.138) 0:03:04.187 ********** 2026-03-02 01:14:26.591935 | 
orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.591942 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.591949 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.591956 | orchestrator | 2026-03-02 01:14:26.591962 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-03-02 01:14:26.591967 | orchestrator | Monday 02 March 2026 01:09:48 +0000 (0:00:00.444) 0:03:04.632 ********** 2026-03-02 01:14:26.591974 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-02 01:14:26.591980 | orchestrator | 2026-03-02 01:14:26.591987 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-03-02 01:14:26.591993 | orchestrator | Monday 02 March 2026 01:09:49 +0000 (0:00:00.694) 0:03:05.326 ********** 2026-03-02 01:14:26.591999 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.592005 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.592009 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.592013 | orchestrator | 2026-03-02 01:14:26.592016 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-02 01:14:26.592020 | orchestrator | Monday 02 March 2026 01:09:49 +0000 (0:00:00.258) 0:03:05.584 ********** 2026-03-02 01:14:26.592024 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:14:26.592028 | orchestrator | 2026-03-02 01:14:26.592031 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-02 01:14:26.592035 | orchestrator | Monday 02 March 2026 01:09:50 +0000 (0:00:00.541) 0:03:06.126 ********** 2026-03-02 01:14:26.592042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-02 01:14:26.592054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-02 01:14:26.592062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-02 01:14:26.592068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.592072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.592076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.592083 | orchestrator | 2026-03-02 01:14:26.592087 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-02 01:14:26.592091 | orchestrator | Monday 02 March 2026 01:09:52 +0000 (0:00:02.483) 0:03:08.609 ********** 2026-03-02 01:14:26.592095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-02 01:14:26.592103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 01:14:26.592107 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.592113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-02 01:14:26.592118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 01:14:26.592124 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.592128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-02 01:14:26.592132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 01:14:26.592136 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.592140 | orchestrator | 2026-03-02 01:14:26.592144 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-02 01:14:26.592148 | orchestrator | Monday 02 March 2026 01:09:53 +0000 (0:00:00.591) 0:03:09.201 ********** 2026-03-02 
01:14:26.592153 | orchestrator | 2026-03-02 01:14:26 | INFO  | Task beb6fba3-4a1f-4a81-be47-fa0f340ac728 is in state SUCCESS 2026-03-02 01:14:26.592429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-02 01:14:26.592440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': 
'30'}}})  2026-03-02 01:14:26.592466 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.592473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-02 01:14:26.592480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 01:14:26.592487 | 
orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.592495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-02 01:14:26.592501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-02 01:14:26.592508 | orchestrator | skipping: [testbed-node-2] 
2026-03-02 01:14:26.592512 | orchestrator | 2026-03-02 01:14:26.592516 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-02 01:14:26.592520 | orchestrator | Monday 02 March 2026 01:09:54 +0000 (0:00:00.693) 0:03:09.894 ********** 2026-03-02 01:14:26.592524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-02 01:14:26.592528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-02 01:14:26.592537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-02 01:14:26.592545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.592549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.592553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.592557 | orchestrator | 2026-03-02 
01:14:26.592561 | orchestrator | TASK [nova : Copying over nova.conf] *******************************************
2026-03-02 01:14:26.592565 | orchestrator | Monday 02 March 2026 01:09:56 +0000 (0:00:02.455) 0:03:12.350 **********
2026-03-02 01:14:26.592572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-02 01:14:26.592579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-02 01:14:26.592586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-02 01:14:26.592590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-02 01:14:26.592594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-02 01:14:26.592601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-02 01:14:26.592605 | orchestrator |
2026-03-02 01:14:26.592609 | orchestrator | TASK [nova : Copying over existing policy file] ********************************
2026-03-02 01:14:26.592612 | orchestrator | Monday 02 March 2026 01:10:01 +0000 (0:00:04.811) 0:03:17.161 **********
2026-03-02 01:14:26.592621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-02 01:14:26.592625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-02 01:14:26.592629 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:14:26.592633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-02 01:14:26.592639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-02 01:14:26.592643 | orchestrator | skipping: [testbed-node-1]
2026-03-02 01:14:26.592651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-02 01:14:26.592658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-02 01:14:26.592662 | orchestrator | skipping: [testbed-node-2]
2026-03-02 01:14:26.592668 | orchestrator |
2026-03-02 01:14:26.592674 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2026-03-02 01:14:26.592681 | orchestrator | Monday 02 March 2026 01:10:01 +0000 (0:00:00.507) 0:03:17.669 **********
2026-03-02 01:14:26.592686 | orchestrator | changed: [testbed-node-0]
2026-03-02 01:14:26.592695 | orchestrator | changed: [testbed-node-1]
2026-03-02 01:14:26.592702 | orchestrator | changed: [testbed-node-2]
2026-03-02 01:14:26.592708 | orchestrator |
2026-03-02 01:14:26.592714 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2026-03-02 01:14:26.592720 | orchestrator | Monday 02 March 2026 01:10:03 +0000 (0:00:01.365) 0:03:19.035 **********
2026-03-02 01:14:26.592726 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:14:26.592732 | orchestrator | skipping: [testbed-node-1]
2026-03-02 01:14:26.592737 | orchestrator | skipping: [testbed-node-2]
2026-03-02 01:14:26.592743 | orchestrator |
2026-03-02 01:14:26.592750 | orchestrator | TASK [nova : Check nova containers] ********************************************
2026-03-02 01:14:26.592756 | orchestrator | Monday 02 March 2026 01:10:03 +0000 (0:00:00.290) 0:03:19.325 **********
2026-03-02 01:14:26.592762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-02 01:14:26.592783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-02 01:14:26.592792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-02 01:14:26.592798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-02 01:14:26.592831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-02 01:14:26.592838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-02 01:14:26.592850 | orchestrator |
2026-03-02 01:14:26.592863 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-03-02 01:14:26.592868 | orchestrator | Monday 02 March 2026 01:10:05 +0000 (0:00:01.768) 0:03:21.093 **********
2026-03-02 01:14:26.592872 | orchestrator |
2026-03-02 01:14:26.592890 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-03-02 01:14:26.592894 | orchestrator | Monday 02 March 2026 01:10:05 +0000 (0:00:00.128) 0:03:21.221 **********
2026-03-02 01:14:26.592898 | orchestrator |
2026-03-02 01:14:26.592905 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-03-02 01:14:26.592909 | orchestrator | Monday 02 March 2026 01:10:05 +0000 (0:00:00.118) 0:03:21.340 **********
2026-03-02 01:14:26.592913 | orchestrator |
2026-03-02 01:14:26.592917 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-03-02 01:14:26.592928 | orchestrator | Monday 02 March 2026 01:10:05 +0000 (0:00:00.121) 0:03:21.461 **********
2026-03-02 01:14:26.592932 | orchestrator | changed: [testbed-node-0]
2026-03-02 01:14:26.592936 | orchestrator | changed: [testbed-node-1]
2026-03-02 01:14:26.592940 | orchestrator | changed: [testbed-node-2]
2026-03-02 01:14:26.592945 | orchestrator |
2026-03-02 01:14:26.592953 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-03-02 01:14:26.592962 | orchestrator | Monday 02 March 2026 01:10:23 +0000 (0:00:17.546) 0:03:39.008 **********
2026-03-02 01:14:26.592973 | orchestrator | changed: [testbed-node-0]
2026-03-02 01:14:26.592980 | orchestrator | changed: [testbed-node-1]
2026-03-02 01:14:26.592986 | orchestrator | changed: [testbed-node-2]
2026-03-02 01:14:26.592993 | orchestrator |
2026-03-02 01:14:26.592999 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-03-02 01:14:26.593006 | orchestrator |
2026-03-02 01:14:26.593010 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-02 01:14:26.593016 | orchestrator | Monday 02 March 2026 01:10:32 +0000 (0:00:09.681) 0:03:48.689 **********
2026-03-02 01:14:26.593026 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 01:14:26.593033 | orchestrator |
2026-03-02 01:14:26.593039 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-02 01:14:26.593045 | orchestrator | Monday 02 March 2026 01:10:33 +0000 (0:00:01.056) 0:03:49.745 **********
2026-03-02 01:14:26.593051 | orchestrator | skipping: [testbed-node-3]
2026-03-02 01:14:26.593057 | orchestrator | skipping: [testbed-node-4]
2026-03-02 01:14:26.593086 | orchestrator | skipping: [testbed-node-5]
2026-03-02 01:14:26.593094 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:14:26.593101 | orchestrator | skipping: [testbed-node-1]
2026-03-02 01:14:26.593107 | orchestrator | skipping: [testbed-node-2]
2026-03-02 01:14:26.593114 | orchestrator |
2026-03-02 01:14:26.593120 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-03-02 01:14:26.593127 | orchestrator | Monday 02 March 2026 01:10:34 +0000 (0:00:00.529) 0:03:50.274 **********
2026-03-02 01:14:26.593133 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:14:26.593140 | orchestrator | skipping: [testbed-node-1]
2026-03-02 01:14:26.593146 | orchestrator | skipping: [testbed-node-2]
2026-03-02 01:14:26.593152 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-02 01:14:26.593158 | orchestrator |
2026-03-02 01:14:26.593164 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-02 01:14:26.593171 | orchestrator | Monday 02 March 2026 01:10:35 +0000 (0:00:00.870) 0:03:51.145 **********
2026-03-02 01:14:26.593178 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-03-02 01:14:26.593191 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-03-02 01:14:26.593198 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-03-02 01:14:26.593205 | orchestrator |
2026-03-02 01:14:26.593212 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-02 01:14:26.593218 | orchestrator | Monday 02 March 2026 01:10:35 +0000 (0:00:00.710) 0:03:51.855 **********
2026-03-02 01:14:26.593225 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-03-02 01:14:26.593231 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-03-02 01:14:26.593238 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-03-02 01:14:26.593244 | orchestrator |
2026-03-02 01:14:26.593250 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-02 01:14:26.593257 | orchestrator | Monday 02 March 2026 01:10:37 +0000 (0:00:01.380) 0:03:53.236 **********
2026-03-02 01:14:26.593263 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-03-02 01:14:26.593270 | orchestrator | skipping: [testbed-node-3]
2026-03-02 01:14:26.593276 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-03-02 01:14:26.593283 | orchestrator | skipping: [testbed-node-4]
2026-03-02 01:14:26.593290 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-03-02 01:14:26.593297 | orchestrator | skipping: [testbed-node-5]
2026-03-02 01:14:26.593304 | orchestrator |
2026-03-02 01:14:26.593309 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-03-02 01:14:26.593315 | orchestrator | Monday 02 March 2026 01:10:37 +0000 (0:00:00.462) 0:03:53.698 **********
2026-03-02 01:14:26.593321 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-02 01:14:26.593328 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-02 01:14:26.593335 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:14:26.593341 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-02 01:14:26.593348 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-02 01:14:26.593355 | orchestrator | skipping: [testbed-node-1]
2026-03-02 01:14:26.593361 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-02 01:14:26.593367 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-02 01:14:26.593374 | orchestrator | skipping: [testbed-node-2]
2026-03-02 01:14:26.593385 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-02 01:14:26.593391 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-02 01:14:26.593397 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-02 01:14:26.593403 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-02 01:14:26.593409 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-02 01:14:26.593415 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-02 01:14:26.593421 | orchestrator |
2026-03-02 01:14:26.593427 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-03-02 01:14:26.593433 | orchestrator | Monday 02 March 2026 01:10:38 +0000 (0:00:01.073) 0:03:54.772 **********
2026-03-02 01:14:26.593439 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:14:26.593457 | orchestrator | skipping: [testbed-node-1]
2026-03-02 01:14:26.593462 | orchestrator | skipping: [testbed-node-2]
2026-03-02 01:14:26.593468 | orchestrator | changed: [testbed-node-3]
2026-03-02 01:14:26.593474 | orchestrator | changed: [testbed-node-4]
2026-03-02 01:14:26.593481 | orchestrator | changed: [testbed-node-5]
2026-03-02 01:14:26.593487 | orchestrator |
2026-03-02 01:14:26.593498 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-03-02 01:14:26.593504 | orchestrator | Monday 02 March 2026 01:10:39 +0000 (0:00:01.063) 0:03:55.835 **********
2026-03-02 01:14:26.593515 | orchestrator | skipping: [testbed-node-0]
2026-03-02 01:14:26.593521 | orchestrator | skipping: [testbed-node-1]
2026-03-02 01:14:26.593528 | orchestrator | skipping: [testbed-node-2]
2026-03-02 01:14:26.593534 | orchestrator | changed: [testbed-node-4]
2026-03-02 01:14:26.593541 | orchestrator | changed: [testbed-node-3]
2026-03-02 01:14:26.593547 | orchestrator | changed: [testbed-node-5]
2026-03-02 01:14:26.593553 | orchestrator |
2026-03-02 01:14:26.593560 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-02 01:14:26.593566 | orchestrator | Monday 02 March 2026 01:10:41 +0000 (0:00:01.665) 0:03:57.500 **********
2026-03-02 01:14:26.593574 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-02 01:14:26.593582 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-02 01:14:26.593589 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-02 01:14:26.593601 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-02 01:14:26.593612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-02 01:14:26.593623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-02 01:14:26.593629 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-02 01:14:26.593636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-02 01:14:26.593642 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-02 01:14:26.593651 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-02 01:14:26.593657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-02 01:14:26.593670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-02 01:14:26.593676 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-02 01:14:26.593682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-02 01:14:26.593688 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-02 01:14:26.593694 | orchestrator |
2026-03-02 01:14:26.593700 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-02 01:14:26.593706 | orchestrator | Monday 02 March 2026 01:10:43 +0000 (0:00:02.109) 0:03:59.610 **********
2026-03-02 01:14:26.593713 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-02 01:14:26.593720 | orchestrator |
2026-03-02 01:14:26.593726 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-03-02 01:14:26.593732 | orchestrator | Monday 02 March 2026 01:10:44 +0000 (0:00:01.031) 0:04:00.642 **********
2026-03-02 01:14:26.593743 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-02 01:14:26.593758 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-02 01:14:26.593766 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes':
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-02 01:14:26.593773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-02 01:14:26.593780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-02 01:14:26.593790 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-02 01:14:26.593803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-02 01:14:26.593813 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-02 01:14:26.593819 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-02 01:14:26.593826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.593833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.593839 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.593850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.593864 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.593871 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.593878 | orchestrator | 2026-03-02 01:14:26.593882 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-02 01:14:26.593886 | orchestrator | Monday 02 March 2026 01:10:47 +0000 (0:00:03.153) 0:04:03.795 ********** 2026-03-02 01:14:26.593890 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-02 01:14:26.593895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-02 01:14:26.593901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-02 01:14:26.593908 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:14:26.593914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 
67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-02 01:14:26.593918 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-02 01:14:26.593922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-02 01:14:26.593926 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:14:26.593930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-02 01:14:26.593934 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-02 01:14:26.593945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}})  2026-03-02 01:14:26.593957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-02 01:14:26.593966 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:14:26.593972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-02 01:14:26.593978 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.593984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-02 01:14:26.593990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-02 01:14:26.593997 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.594002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-02 01:14:26.594266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
2026-03-02 01:14:26.594279 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.594283 | orchestrator | 2026-03-02 01:14:26.594287 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-02 01:14:26.594291 | orchestrator | Monday 02 March 2026 01:10:49 +0000 (0:00:01.142) 0:04:04.938 ********** 2026-03-02 01:14:26.594298 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-02 01:14:26.594303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-02 01:14:26.594307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-02 01:14:26.594311 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:14:26.594315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-02 01:14:26.594325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-02 01:14:26.594333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-02 01:14:26.594337 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:14:26.594343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-02 01:14:26.594347 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-02 01:14:26.594351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-02 01:14:26.594357 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:14:26.594362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-02 01:14:26.594368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-02 01:14:26.594372 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.594376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-02 01:14:26.594380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-02 01:14:26.594384 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.594388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-02 01:14:26.594392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-02 01:14:26.594398 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.594402 | orchestrator | 2026-03-02 01:14:26.594419 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-02 01:14:26.594423 | orchestrator | Monday 02 March 2026 01:10:50 +0000 (0:00:01.819) 0:04:06.758 ********** 2026-03-02 01:14:26.594427 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.594431 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.594435 | orchestrator | skipping: 
[testbed-node-2] 2026-03-02 01:14:26.594439 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 01:14:26.594455 | orchestrator | 2026-03-02 01:14:26.594462 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-02 01:14:26.594468 | orchestrator | Monday 02 March 2026 01:10:51 +0000 (0:00:00.856) 0:04:07.614 ********** 2026-03-02 01:14:26.594475 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-02 01:14:26.594481 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-02 01:14:26.594485 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-02 01:14:26.594490 | orchestrator | 2026-03-02 01:14:26.594498 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-02 01:14:26.594507 | orchestrator | Monday 02 March 2026 01:10:52 +0000 (0:00:00.859) 0:04:08.474 ********** 2026-03-02 01:14:26.594514 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-02 01:14:26.594520 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-02 01:14:26.594526 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-02 01:14:26.594532 | orchestrator | 2026-03-02 01:14:26.594538 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-02 01:14:26.594543 | orchestrator | Monday 02 March 2026 01:10:53 +0000 (0:00:00.805) 0:04:09.280 ********** 2026-03-02 01:14:26.594549 | orchestrator | ok: [testbed-node-3] 2026-03-02 01:14:26.594555 | orchestrator | ok: [testbed-node-4] 2026-03-02 01:14:26.594562 | orchestrator | ok: [testbed-node-5] 2026-03-02 01:14:26.594568 | orchestrator | 2026-03-02 01:14:26.594576 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-02 01:14:26.594583 | orchestrator | Monday 02 March 2026 01:10:54 +0000 (0:00:00.626) 0:04:09.906 ********** 2026-03-02 
01:14:26.594587 | orchestrator | ok: [testbed-node-3] 2026-03-02 01:14:26.594591 | orchestrator | ok: [testbed-node-4] 2026-03-02 01:14:26.594594 | orchestrator | ok: [testbed-node-5] 2026-03-02 01:14:26.594598 | orchestrator | 2026-03-02 01:14:26.594605 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-03-02 01:14:26.594609 | orchestrator | Monday 02 March 2026 01:10:54 +0000 (0:00:00.597) 0:04:10.503 ********** 2026-03-02 01:14:26.594613 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-02 01:14:26.594752 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-02 01:14:26.594763 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-02 01:14:26.594769 | orchestrator | 2026-03-02 01:14:26.594775 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-02 01:14:26.594781 | orchestrator | Monday 02 March 2026 01:10:55 +0000 (0:00:01.078) 0:04:11.582 ********** 2026-03-02 01:14:26.594787 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-02 01:14:26.594794 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-02 01:14:26.594800 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-02 01:14:26.594806 | orchestrator | 2026-03-02 01:14:26.594812 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-02 01:14:26.594818 | orchestrator | Monday 02 March 2026 01:10:56 +0000 (0:00:01.080) 0:04:12.662 ********** 2026-03-02 01:14:26.594831 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-02 01:14:26.594837 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-02 01:14:26.594849 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-02 01:14:26.594855 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-02 01:14:26.594861 
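The "Check nova/cinder keyring file" tasks above run delegated to localhost, and "Extract nova key from file" / "Extract cinder key from file" then pull the base64 key out of the corresponding ceph keyring before the keyring files are copied to the compute nodes. A minimal sketch of that extraction step, using a fabricated keyring (this is an illustration, not kolla-ansible's actual implementation):

```python
import re

# Fabricated keyring content for illustration only -- not a real key.
KEYRING = """\
[client.nova]
    key = AQDExampleFakeKeyOnly==
    caps mon = "profile rbd"
    caps osd = "profile rbd pool=vms"
"""

def extract_key(keyring_text: str, client: str) -> str:
    # Grab the [client.<name>] section, then its "key = ..." line.
    section = re.search(
        rf"\[client\.{re.escape(client)}\](.*?)(?=\n\[|\Z)", keyring_text, re.S
    )
    return re.search(r"^\s*key\s*=\s*(\S+)", section.group(1), re.M).group(1)

print(extract_key(KEYRING, "nova"))
```

The extracted key is what later gets pushed into libvirt as the secret value, while the keyring file itself lands under the container's config directory.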
| orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-02 01:14:26.594867 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-02 01:14:26.594873 | orchestrator | 2026-03-02 01:14:26.594879 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-02 01:14:26.594885 | orchestrator | Monday 02 March 2026 01:11:00 +0000 (0:00:03.602) 0:04:16.264 ********** 2026-03-02 01:14:26.594891 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:14:26.594897 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:14:26.594903 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:14:26.594909 | orchestrator | 2026-03-02 01:14:26.594915 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-02 01:14:26.594922 | orchestrator | Monday 02 March 2026 01:11:00 +0000 (0:00:00.474) 0:04:16.739 ********** 2026-03-02 01:14:26.594927 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:14:26.594934 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:14:26.594940 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:14:26.594945 | orchestrator | 2026-03-02 01:14:26.594952 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-02 01:14:26.594958 | orchestrator | Monday 02 March 2026 01:11:01 +0000 (0:00:00.298) 0:04:17.037 ********** 2026-03-02 01:14:26.594964 | orchestrator | changed: [testbed-node-3] 2026-03-02 01:14:26.594970 | orchestrator | changed: [testbed-node-4] 2026-03-02 01:14:26.594976 | orchestrator | changed: [testbed-node-5] 2026-03-02 01:14:26.594982 | orchestrator | 2026-03-02 01:14:26.594988 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-02 01:14:26.594994 | orchestrator | Monday 02 March 2026 01:11:02 +0000 (0:00:01.201) 0:04:18.238 ********** 2026-03-02 01:14:26.595001 | orchestrator | changed: 
[testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-02 01:14:26.595007 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-02 01:14:26.595013 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-02 01:14:26.595020 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-02 01:14:26.595027 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-02 01:14:26.595036 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-02 01:14:26.595042 | orchestrator | 2026-03-02 01:14:26.595048 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-02 01:14:26.595054 | orchestrator | Monday 02 March 2026 01:11:05 +0000 (0:00:03.169) 0:04:21.407 ********** 2026-03-02 01:14:26.595060 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-02 01:14:26.595066 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-02 01:14:26.595072 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-02 01:14:26.595078 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-02 01:14:26.595084 | orchestrator | changed: [testbed-node-3] 2026-03-02 01:14:26.595090 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-02 01:14:26.595097 | orchestrator | changed: [testbed-node-4] 2026-03-02 01:14:26.595103 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-02 01:14:26.595109 | orchestrator | 
changed: [testbed-node-5] 2026-03-02 01:14:26.595115 | orchestrator | 2026-03-02 01:14:26.595121 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-03-02 01:14:26.595127 | orchestrator | Monday 02 March 2026 01:11:08 +0000 (0:00:03.023) 0:04:24.431 ********** 2026-03-02 01:14:26.595137 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.595143 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.595149 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.595155 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-02 01:14:26.595161 | orchestrator | 2026-03-02 01:14:26.595167 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-03-02 01:14:26.595192 | orchestrator | Monday 02 March 2026 01:11:10 +0000 (0:00:01.528) 0:04:25.960 ********** 2026-03-02 01:14:26.595200 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-02 01:14:26.595206 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-02 01:14:26.595212 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-02 01:14:26.595218 | orchestrator | 2026-03-02 01:14:26.595224 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-03-02 01:14:26.595230 | orchestrator | Monday 02 March 2026 01:11:11 +0000 (0:00:01.175) 0:04:27.135 ********** 2026-03-02 01:14:26.595237 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:14:26.595243 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:14:26.595250 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:14:26.595256 | orchestrator | 2026-03-02 01:14:26.595262 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-02 01:14:26.595269 | orchestrator | Monday 02 March 2026 01:11:11 +0000 (0:00:00.364) 0:04:27.499 ********** 2026-03-02 
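The "Pushing nova secret xml for libvirt" task defines one libvirt secret per ceph client (the UUIDs for `client.nova secret` and `client.cinder secret` appear in the loop items above), and "Pushing secrets key for libvirt" then sets each secret's value to the extracted key. A sketch of the ceph-usage secret XML such a task renders, built here with the UUID from the log (the format follows libvirt's secret XML documentation; the exact kolla-ansible template may differ):

```python
import xml.etree.ElementTree as ET

def ceph_secret_xml(uuid: str, name: str) -> str:
    # libvirt "ceph" usage secret: persistent (ephemeral=no) and readable
    # back out of libvirt (private=no), identified by UUID and usage name.
    secret = ET.Element("secret", ephemeral="no", private="no")
    ET.SubElement(secret, "uuid").text = uuid
    usage = ET.SubElement(secret, "usage", type="ceph")
    ET.SubElement(usage, "name").text = name
    return ET.tostring(secret, encoding="unicode")

xml = ceph_secret_xml("5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd", "client.nova secret")
print(xml)
```

With the secret defined and its value set (conceptually `virsh secret-define` followed by `virsh secret-set-value`), nova-compute can reference the UUID in its ceph/rbd configuration without the raw key appearing in nova.conf.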
01:14:26.595273 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:14:26.595277 | orchestrator | 2026-03-02 01:14:26.595281 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-02 01:14:26.595284 | orchestrator | Monday 02 March 2026 01:11:11 +0000 (0:00:00.119) 0:04:27.618 ********** 2026-03-02 01:14:26.595291 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:14:26.595295 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:14:26.595299 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:14:26.595303 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.595306 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.595310 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.595314 | orchestrator | 2026-03-02 01:14:26.595318 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-02 01:14:26.595321 | orchestrator | Monday 02 March 2026 01:11:12 +0000 (0:00:00.765) 0:04:28.384 ********** 2026-03-02 01:14:26.595325 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-02 01:14:26.595329 | orchestrator | 2026-03-02 01:14:26.595332 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-02 01:14:26.595336 | orchestrator | Monday 02 March 2026 01:11:13 +0000 (0:00:00.716) 0:04:29.101 ********** 2026-03-02 01:14:26.595340 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:14:26.595343 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:14:26.595347 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:14:26.595351 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.595355 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.595358 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.595362 | orchestrator | 2026-03-02 01:14:26.595366 | orchestrator | TASK [nova-cell : Copying over config.json files for services] 
***************** 2026-03-02 01:14:26.595370 | orchestrator | Monday 02 March 2026 01:11:13 +0000 (0:00:00.589) 0:04:29.691 ********** 2026-03-02 01:14:26.595374 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595382 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595389 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595406 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595419 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-02 
01:14:26.595424 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595457 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595469 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595475 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595480 | orchestrator | 2026-03-02 01:14:26.595484 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-02 01:14:26.595489 | orchestrator | Monday 02 March 2026 01:11:17 +0000 (0:00:03.715) 0:04:33.406 ********** 2026-03-02 01:14:26.595496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-02 01:14:26.595503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-02 01:14:26.595508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-02 01:14:26.595515 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-02 01:14:26.595519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-02 01:14:26.595525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-02 01:14:26.595532 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595536 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.595579 | orchestrator | 2026-03-02 01:14:26.595583 | orchestrator | 
TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-02 01:14:26.595587 | orchestrator | Monday 02 March 2026 01:11:23 +0000 (0:00:06.053) 0:04:39.460 ********** 2026-03-02 01:14:26.595591 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:14:26.595595 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:14:26.595598 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:14:26.595602 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.595606 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.595610 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.595613 | orchestrator | 2026-03-02 01:14:26.595617 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-02 01:14:26.595621 | orchestrator | Monday 02 March 2026 01:11:25 +0000 (0:00:01.824) 0:04:41.285 ********** 2026-03-02 01:14:26.595625 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-02 01:14:26.595629 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-02 01:14:26.595633 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-02 01:14:26.595636 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-02 01:14:26.595640 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-02 01:14:26.595644 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-02 01:14:26.595648 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.595652 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-02 01:14:26.595656 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.595659 | orchestrator | 
changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-02 01:14:26.595663 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-02 01:14:26.595667 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.595671 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-02 01:14:26.595675 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-02 01:14:26.595678 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-02 01:14:26.595682 | orchestrator | 2026-03-02 01:14:26.595686 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-02 01:14:26.595690 | orchestrator | Monday 02 March 2026 01:11:28 +0000 (0:00:03.391) 0:04:44.676 ********** 2026-03-02 01:14:26.595694 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:14:26.595697 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:14:26.595701 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:14:26.595705 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.595709 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.595714 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.595718 | orchestrator | 2026-03-02 01:14:26.595722 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-02 01:14:26.595726 | orchestrator | Monday 02 March 2026 01:11:29 +0000 (0:00:00.625) 0:04:45.301 ********** 2026-03-02 01:14:26.595730 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-02 01:14:26.595737 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  
2026-03-02 01:14:26.595741 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-02 01:14:26.595745 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-02 01:14:26.595749 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-02 01:14:26.595754 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-02 01:14:26.595758 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-02 01:14:26.595762 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-02 01:14:26.595766 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-02 01:14:26.595770 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-02 01:14:26.595773 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.595777 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-02 01:14:26.595781 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.595785 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-02 01:14:26.595788 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.595792 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-02 01:14:26.595796 | orchestrator | changed: [testbed-node-3] => (item={'src': 
'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-02 01:14:26.595799 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-02 01:14:26.595803 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-02 01:14:26.595807 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-02 01:14:26.595811 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-02 01:14:26.595814 | orchestrator | 2026-03-02 01:14:26.595818 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-02 01:14:26.595822 | orchestrator | Monday 02 March 2026 01:11:34 +0000 (0:00:05.058) 0:04:50.360 ********** 2026-03-02 01:14:26.595826 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-02 01:14:26.595830 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-02 01:14:26.595834 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-02 01:14:26.595837 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-02 01:14:26.595841 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-02 01:14:26.595845 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-02 01:14:26.595849 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-02 01:14:26.595852 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-02 01:14:26.595856 | 
orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-02 01:14:26.595862 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-02 01:14:26.595866 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-02 01:14:26.595870 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-02 01:14:26.595873 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.595877 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-02 01:14:26.595881 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-02 01:14:26.595885 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.595890 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-02 01:14:26.595894 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.595898 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-02 01:14:26.595902 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-02 01:14:26.595906 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-02 01:14:26.595909 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-02 01:14:26.595913 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-02 01:14:26.595917 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-02 01:14:26.595920 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-02 01:14:26.595924 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-02 01:14:26.595930 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-02 01:14:26.595934 | orchestrator | 2026-03-02 01:14:26.595938 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-02 01:14:26.595942 | orchestrator | Monday 02 March 2026 01:11:40 +0000 (0:00:06.320) 0:04:56.680 ********** 2026-03-02 01:14:26.595945 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:14:26.595949 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:14:26.595953 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:14:26.595956 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.595960 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.595964 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.595968 | orchestrator | 2026-03-02 01:14:26.595971 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-02 01:14:26.595975 | orchestrator | Monday 02 March 2026 01:11:41 +0000 (0:00:00.697) 0:04:57.378 ********** 2026-03-02 01:14:26.595979 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:14:26.595983 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:14:26.595986 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:14:26.595990 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.595994 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.595997 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.596001 | orchestrator | 2026-03-02 01:14:26.596005 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-02 01:14:26.596009 | orchestrator | Monday 02 March 2026 01:11:42 +0000 (0:00:00.544) 0:04:57.922 ********** 2026-03-02 01:14:26.596015 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.596021 | 
orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.596027 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.596034 | orchestrator | changed: [testbed-node-3] 2026-03-02 01:14:26.596039 | orchestrator | changed: [testbed-node-5] 2026-03-02 01:14:26.596045 | orchestrator | changed: [testbed-node-4] 2026-03-02 01:14:26.596050 | orchestrator | 2026-03-02 01:14:26.596055 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-02 01:14:26.596065 | orchestrator | Monday 02 March 2026 01:11:43 +0000 (0:00:01.945) 0:04:59.867 ********** 2026-03-02 01:14:26.596071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-02 01:14:26.596078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-02 01:14:26.596090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-02 01:14:26.596098 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:14:26.596107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-02 01:14:26.596114 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-02 01:14:26.596119 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-02 01:14:26.596126 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:14:26.596130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-02 01:14:26.596134 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-02 01:14:26.596141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-02 01:14:26.596145 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:14:26.596150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 
'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-02 01:14:26.596155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-02 01:14:26.596161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-02 01:14:26.596165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-02 01:14:26.596169 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.596173 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.596176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-02 01:14:26.596183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-02 01:14:26.596188 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.596191 | orchestrator | 2026-03-02 01:14:26.596195 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-02 
01:14:26.596199 | orchestrator | Monday 02 March 2026 01:11:45 +0000 (0:00:01.505) 0:05:01.373 ********** 2026-03-02 01:14:26.596203 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-02 01:14:26.596207 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-02 01:14:26.596211 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:14:26.596214 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-02 01:14:26.596218 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-02 01:14:26.596224 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:14:26.596228 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-02 01:14:26.596231 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-02 01:14:26.596235 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:14:26.596239 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-02 01:14:26.596245 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-02 01:14:26.596249 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.596253 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-02 01:14:26.596256 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-02 01:14:26.596260 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.596264 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-02 01:14:26.596268 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-02 01:14:26.596271 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.596275 | orchestrator | 2026-03-02 01:14:26.596279 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-02 01:14:26.596282 | orchestrator | Monday 02 March 2026 01:11:46 +0000 (0:00:00.707) 0:05:02.080 ********** 
2026-03-02 01:14:26.596286 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-02 01:14:26.596291 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-02 01:14:26.596297 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 
'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-02 01:14:26.596302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-02 01:14:26.596310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 
2026-03-02 01:14:26.596314 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-02 01:14:26.596318 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-02 01:14:26.596322 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-02 01:14:26.596327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-02 01:14:26.596333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.596338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.596346 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 
'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.596350 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.596354 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.596358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-02 01:14:26.596362 | orchestrator | 2026-03-02 01:14:26.596366 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-02 01:14:26.596370 | orchestrator | Monday 02 March 2026 01:11:48 +0000 (0:00:02.675) 0:05:04.756 ********** 2026-03-02 01:14:26.596374 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:14:26.596378 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:14:26.596382 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:14:26.596385 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.596390 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.596396 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.596402 | orchestrator | 2026-03-02 01:14:26.596416 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-02 01:14:26.596423 | orchestrator | Monday 02 March 2026 01:11:49 +0000 (0:00:00.754) 0:05:05.510 ********** 2026-03-02 01:14:26.596429 | orchestrator | 2026-03-02 01:14:26.596434 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-02 01:14:26.596440 | orchestrator | Monday 02 March 2026 01:11:49 +0000 (0:00:00.138) 0:05:05.649 ********** 
2026-03-02 01:14:26.596460 | orchestrator | 2026-03-02 01:14:26.596466 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-02 01:14:26.596485 | orchestrator | Monday 02 March 2026 01:11:49 +0000 (0:00:00.139) 0:05:05.788 ********** 2026-03-02 01:14:26.596491 | orchestrator | 2026-03-02 01:14:26.596496 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-02 01:14:26.596502 | orchestrator | Monday 02 March 2026 01:11:50 +0000 (0:00:00.145) 0:05:05.934 ********** 2026-03-02 01:14:26.596508 | orchestrator | 2026-03-02 01:14:26.596514 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-02 01:14:26.596519 | orchestrator | Monday 02 March 2026 01:11:50 +0000 (0:00:00.414) 0:05:06.348 ********** 2026-03-02 01:14:26.596525 | orchestrator | 2026-03-02 01:14:26.596533 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-02 01:14:26.596539 | orchestrator | Monday 02 March 2026 01:11:50 +0000 (0:00:00.149) 0:05:06.498 ********** 2026-03-02 01:14:26.596545 | orchestrator | 2026-03-02 01:14:26.596551 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-02 01:14:26.596556 | orchestrator | Monday 02 March 2026 01:11:50 +0000 (0:00:00.145) 0:05:06.643 ********** 2026-03-02 01:14:26.596562 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:14:26.596568 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:14:26.596574 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:14:26.596579 | orchestrator | 2026-03-02 01:14:26.596585 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-02 01:14:26.596591 | orchestrator | Monday 02 March 2026 01:11:57 +0000 (0:00:06.414) 0:05:13.058 ********** 2026-03-02 01:14:26.596597 | orchestrator | changed: [testbed-node-0] 
2026-03-02 01:14:26.596602 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:14:26.596608 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:14:26.596614 | orchestrator | 2026-03-02 01:14:26.596619 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-02 01:14:26.596625 | orchestrator | Monday 02 March 2026 01:12:13 +0000 (0:00:15.908) 0:05:28.966 ********** 2026-03-02 01:14:26.596631 | orchestrator | changed: [testbed-node-3] 2026-03-02 01:14:26.596637 | orchestrator | changed: [testbed-node-5] 2026-03-02 01:14:26.596643 | orchestrator | changed: [testbed-node-4] 2026-03-02 01:14:26.596648 | orchestrator | 2026-03-02 01:14:26.596654 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-02 01:14:26.596660 | orchestrator | Monday 02 March 2026 01:12:28 +0000 (0:00:15.154) 0:05:44.121 ********** 2026-03-02 01:14:26.596665 | orchestrator | changed: [testbed-node-4] 2026-03-02 01:14:26.596671 | orchestrator | changed: [testbed-node-5] 2026-03-02 01:14:26.596677 | orchestrator | changed: [testbed-node-3] 2026-03-02 01:14:26.596683 | orchestrator | 2026-03-02 01:14:26.596688 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-02 01:14:26.596694 | orchestrator | Monday 02 March 2026 01:12:55 +0000 (0:00:27.056) 0:06:11.177 ********** 2026-03-02 01:14:26.596700 | orchestrator | changed: [testbed-node-3] 2026-03-02 01:14:26.596705 | orchestrator | changed: [testbed-node-4] 2026-03-02 01:14:26.596711 | orchestrator | changed: [testbed-node-5] 2026-03-02 01:14:26.596717 | orchestrator | 2026-03-02 01:14:26.596723 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-02 01:14:26.596729 | orchestrator | Monday 02 March 2026 01:12:56 +0000 (0:00:00.744) 0:06:11.922 ********** 2026-03-02 01:14:26.596734 | orchestrator | changed: [testbed-node-3] 2026-03-02 
01:14:26.596740 | orchestrator | changed: [testbed-node-4] 2026-03-02 01:14:26.596750 | orchestrator | changed: [testbed-node-5] 2026-03-02 01:14:26.596756 | orchestrator | 2026-03-02 01:14:26.596761 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-02 01:14:26.596767 | orchestrator | Monday 02 March 2026 01:12:56 +0000 (0:00:00.667) 0:06:12.590 ********** 2026-03-02 01:14:26.596773 | orchestrator | changed: [testbed-node-3] 2026-03-02 01:14:26.596779 | orchestrator | changed: [testbed-node-5] 2026-03-02 01:14:26.596784 | orchestrator | changed: [testbed-node-4] 2026-03-02 01:14:26.596790 | orchestrator | 2026-03-02 01:14:26.596795 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-02 01:14:26.596802 | orchestrator | Monday 02 March 2026 01:13:14 +0000 (0:00:18.044) 0:06:30.634 ********** 2026-03-02 01:14:26.596807 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:14:26.596813 | orchestrator | 2026-03-02 01:14:26.596819 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-02 01:14:26.596824 | orchestrator | Monday 02 March 2026 01:13:14 +0000 (0:00:00.099) 0:06:30.733 ********** 2026-03-02 01:14:26.596830 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:14:26.596836 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:14:26.596842 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.596848 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.596854 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.596861 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-03-02 01:14:26.596868 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-02 01:14:26.596874 | orchestrator | 2026-03-02 01:14:26.596881 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-02 01:14:26.596885 | orchestrator | Monday 02 March 2026 01:13:35 +0000 (0:00:21.125) 0:06:51.858 ********** 2026-03-02 01:14:26.596889 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.596892 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:14:26.596896 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:14:26.596900 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.596904 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:14:26.596907 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.596911 | orchestrator | 2026-03-02 01:14:26.596917 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-02 01:14:26.596921 | orchestrator | Monday 02 March 2026 01:13:45 +0000 (0:00:09.581) 0:07:01.440 ********** 2026-03-02 01:14:26.596925 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:14:26.596929 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.596932 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:14:26.596936 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.596940 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.596943 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-03-02 01:14:26.596947 | orchestrator | 2026-03-02 01:14:26.596951 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-02 01:14:26.596955 | orchestrator | Monday 02 March 2026 01:13:49 +0000 (0:00:03.451) 0:07:04.891 ********** 2026-03-02 01:14:26.596958 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-02 01:14:26.596962 | 
orchestrator | 2026-03-02 01:14:26.596966 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-02 01:14:26.596969 | orchestrator | Monday 02 March 2026 01:14:03 +0000 (0:00:14.144) 0:07:19.035 ********** 2026-03-02 01:14:26.596975 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-02 01:14:26.596979 | orchestrator | 2026-03-02 01:14:26.596983 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-02 01:14:26.596987 | orchestrator | Monday 02 March 2026 01:14:04 +0000 (0:00:01.231) 0:07:20.267 ********** 2026-03-02 01:14:26.596990 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:14:26.596994 | orchestrator | 2026-03-02 01:14:26.596998 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-02 01:14:26.597005 | orchestrator | Monday 02 March 2026 01:14:05 +0000 (0:00:01.307) 0:07:21.574 ********** 2026-03-02 01:14:26.597009 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-02 01:14:26.597012 | orchestrator | 2026-03-02 01:14:26.597016 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-02 01:14:26.597020 | orchestrator | Monday 02 March 2026 01:14:17 +0000 (0:00:12.179) 0:07:33.754 ********** 2026-03-02 01:14:26.597023 | orchestrator | ok: [testbed-node-3] 2026-03-02 01:14:26.597028 | orchestrator | ok: [testbed-node-4] 2026-03-02 01:14:26.597031 | orchestrator | ok: [testbed-node-5] 2026-03-02 01:14:26.597035 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:14:26.597039 | orchestrator | ok: [testbed-node-1] 2026-03-02 01:14:26.597042 | orchestrator | ok: [testbed-node-2] 2026-03-02 01:14:26.597046 | orchestrator | 2026-03-02 01:14:26.597050 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-02 01:14:26.597053 | orchestrator | 2026-03-02 
01:14:26.597057 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-02 01:14:26.597061 | orchestrator | Monday 02 March 2026 01:14:19 +0000 (0:00:01.969) 0:07:35.723 ********** 2026-03-02 01:14:26.597064 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:14:26.597068 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:14:26.597072 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:14:26.597076 | orchestrator | 2026-03-02 01:14:26.597079 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-02 01:14:26.597083 | orchestrator | 2026-03-02 01:14:26.597087 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-02 01:14:26.597090 | orchestrator | Monday 02 March 2026 01:14:21 +0000 (0:00:01.321) 0:07:37.045 ********** 2026-03-02 01:14:26.597094 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.597098 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.597101 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.597105 | orchestrator | 2026-03-02 01:14:26.597109 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-02 01:14:26.597112 | orchestrator | 2026-03-02 01:14:26.597117 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-02 01:14:26.597124 | orchestrator | Monday 02 March 2026 01:14:21 +0000 (0:00:00.503) 0:07:37.548 ********** 2026-03-02 01:14:26.597130 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-02 01:14:26.597136 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-02 01:14:26.597142 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-02 01:14:26.597148 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-02 01:14:26.597153 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-02 01:14:26.597158 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-02 01:14:26.597164 | orchestrator | skipping: [testbed-node-3] 2026-03-02 01:14:26.597169 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-02 01:14:26.597178 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-02 01:14:26.597183 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-02 01:14:26.597189 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-02 01:14:26.597195 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-02 01:14:26.597200 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-02 01:14:26.597207 | orchestrator | skipping: [testbed-node-4] 2026-03-02 01:14:26.597213 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-02 01:14:26.597218 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-02 01:14:26.597224 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-02 01:14:26.597230 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-03-02 01:14:26.597240 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-03-02 01:14:26.597247 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-02 01:14:26.597252 | orchestrator | skipping: [testbed-node-5] 2026-03-02 01:14:26.597261 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-02 01:14:26.597267 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-02 01:14:26.597275 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-02 01:14:26.597282 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-02 01:14:26.597288 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-02 01:14:26.597294 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-02 01:14:26.597301 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-02 01:14:26.597307 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-02 01:14:26.597313 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-02 01:14:26.597320 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-02 01:14:26.597325 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-02 01:14:26.597329 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-02 01:14:26.597333 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.597336 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.597340 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-02 01:14:26.597344 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-02 01:14:26.597350 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-02 01:14:26.597354 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-03-02 01:14:26.597358 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-03-02 01:14:26.597362 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-02 01:14:26.597365 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.597369 | orchestrator | 2026-03-02 01:14:26.597373 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-02 01:14:26.597377 | orchestrator | 2026-03-02 01:14:26.597381 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-02 01:14:26.597384 | orchestrator | Monday 02 March 2026 01:14:22 +0000 (0:00:01.281) 
0:07:38.829 ********** 2026-03-02 01:14:26.597388 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-02 01:14:26.597392 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-02 01:14:26.597396 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.597400 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-02 01:14:26.597407 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-02 01:14:26.597413 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.597419 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-02 01:14:26.597428 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-02 01:14:26.597435 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:14:26.597441 | orchestrator | 2026-03-02 01:14:26.597463 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-02 01:14:26.597469 | orchestrator | 2026-03-02 01:14:26.597475 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-03-02 01:14:26.597481 | orchestrator | Monday 02 March 2026 01:14:23 +0000 (0:00:00.776) 0:07:39.606 ********** 2026-03-02 01:14:26.597488 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.597493 | orchestrator | 2026-03-02 01:14:26.597499 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-03-02 01:14:26.597505 | orchestrator | 2026-03-02 01:14:26.597511 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-03-02 01:14:26.597517 | orchestrator | Monday 02 March 2026 01:14:24 +0000 (0:00:00.673) 0:07:40.280 ********** 2026-03-02 01:14:26.597535 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:14:26.597541 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:14:26.597548 | orchestrator | skipping: [testbed-node-2] 
2026-03-02 01:14:26.597554 | orchestrator | 2026-03-02 01:14:26.597561 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 01:14:26.597567 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 01:14:26.597573 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2026-03-02 01:14:26.597577 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0 2026-03-02 01:14:26.597581 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0 2026-03-02 01:14:26.597585 | orchestrator | testbed-node-3 : ok=40  changed=27  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-02 01:14:26.597589 | orchestrator | testbed-node-4 : ok=44  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-02 01:14:26.597592 | orchestrator | testbed-node-5 : ok=39  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-02 01:14:26.597596 | orchestrator | 2026-03-02 01:14:26.597600 | orchestrator | 2026-03-02 01:14:26.597604 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 01:14:26.597607 | orchestrator | Monday 02 March 2026 01:14:25 +0000 (0:00:00.607) 0:07:40.887 ********** 2026-03-02 01:14:26.597611 | orchestrator | =============================================================================== 2026-03-02 01:14:26.597615 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 28.22s 2026-03-02 01:14:26.597619 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 27.06s 2026-03-02 01:14:26.597626 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.13s 2026-03-02 01:14:26.597630 | orchestrator | nova-cell : 
Running Nova cell bootstrap container ---------------------- 20.87s 2026-03-02 01:14:26.597633 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 18.04s 2026-03-02 01:14:26.597637 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 17.55s 2026-03-02 01:14:26.597641 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.07s 2026-03-02 01:14:26.597644 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 15.91s 2026-03-02 01:14:26.597648 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 15.15s 2026-03-02 01:14:26.597652 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.05s 2026-03-02 01:14:26.597656 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.14s 2026-03-02 01:14:26.597659 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.12s 2026-03-02 01:14:26.597666 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.18s 2026-03-02 01:14:26.597670 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.03s 2026-03-02 01:14:26.597673 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.74s 2026-03-02 01:14:26.597677 | orchestrator | nova : Restart nova-api container --------------------------------------- 9.68s 2026-03-02 01:14:26.597681 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.58s 2026-03-02 01:14:26.597685 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 6.97s 2026-03-02 01:14:26.597691 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 6.93s 2026-03-02 01:14:26.597695 | orchestrator | nova-cell : Restart 
nova-conductor container ---------------------------- 6.42s 2026-03-02 01:14:26.597701 | orchestrator | 2026-03-02 01:14:26 | INFO  | Task 217cf079-c201-4f32-880c-aef31ffe4d15 is in state STARTED 2026-03-02 01:14:26.597705 | orchestrator | 2026-03-02 01:14:26 | INFO  | Wait 1 second(s) until the next check 2026-03-02 01:15:06.158828 | orchestrator | 2026-03-02 01:15:06 | INFO  | Task 217cf079-c201-4f32-880c-aef31ffe4d15 is in state SUCCESS 2026-03-02 01:15:06.160364 | orchestrator | 2026-03-02 01:15:06.160456 | orchestrator | 2026-03-02 01:15:06.160468 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-02 01:15:06.160503 | orchestrator | 2026-03-02 01:15:06.160509 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-02 01:15:06.160514 | orchestrator | Monday 02 March 2026 01:10:33 +0000 (0:00:00.228) 0:00:00.228 ********** 2026-03-02 01:15:06.160520 | orchestrator | ok: [testbed-node-0] 2026-03-02 
01:15:06.160526 | orchestrator | ok: [testbed-node-1] 2026-03-02 01:15:06.160532 | orchestrator | ok: [testbed-node-2] 2026-03-02 01:15:06.160537 | orchestrator | 2026-03-02 01:15:06.160543 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-02 01:15:06.160549 | orchestrator | Monday 02 March 2026 01:10:34 +0000 (0:00:00.249) 0:00:00.477 ********** 2026-03-02 01:15:06.160555 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-02 01:15:06.160571 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-02 01:15:06.160577 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-02 01:15:06.160628 | orchestrator | 2026-03-02 01:15:06.160674 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-02 01:15:06.160678 | orchestrator | 2026-03-02 01:15:06.160681 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-02 01:15:06.160685 | orchestrator | Monday 02 March 2026 01:10:34 +0000 (0:00:00.357) 0:00:00.835 ********** 2026-03-02 01:15:06.160688 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:15:06.160693 | orchestrator | 2026-03-02 01:15:06.160696 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-03-02 01:15:06.160699 | orchestrator | Monday 02 March 2026 01:10:34 +0000 (0:00:00.476) 0:00:01.311 ********** 2026-03-02 01:15:06.160703 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-02 01:15:06.160706 | orchestrator | 2026-03-02 01:15:06.160710 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-02 01:15:06.160713 | orchestrator | Monday 02 March 2026 01:10:38 +0000 (0:00:04.000) 0:00:05.312 ********** 2026-03-02 01:15:06.160716 | 
orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-02 01:15:06.160720 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-02 01:15:06.160723 | orchestrator | 2026-03-02 01:15:06.160727 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-02 01:15:06.160730 | orchestrator | Monday 02 March 2026 01:10:45 +0000 (0:00:06.311) 0:00:11.623 ********** 2026-03-02 01:15:06.160746 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-02 01:15:06.160749 | orchestrator | 2026-03-02 01:15:06.160753 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-02 01:15:06.160756 | orchestrator | Monday 02 March 2026 01:10:48 +0000 (0:00:02.918) 0:00:14.542 ********** 2026-03-02 01:15:06.160796 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-02 01:15:06.160984 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-02 01:15:06.160990 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-02 01:15:06.160993 | orchestrator | 2026-03-02 01:15:06.160997 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-02 01:15:06.161000 | orchestrator | Monday 02 March 2026 01:10:55 +0000 (0:00:07.208) 0:00:21.750 ********** 2026-03-02 01:15:06.161004 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-02 01:15:06.161007 | orchestrator | 2026-03-02 01:15:06.161011 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-02 01:15:06.161014 | orchestrator | Monday 02 March 2026 01:10:58 +0000 (0:00:03.377) 0:00:25.128 ********** 2026-03-02 01:15:06.161017 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-02 01:15:06.161021 | 
orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-02 01:15:06.161024 | orchestrator | 2026-03-02 01:15:06.161028 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-02 01:15:06.161038 | orchestrator | Monday 02 March 2026 01:11:05 +0000 (0:00:07.164) 0:00:32.292 ********** 2026-03-02 01:15:06.161041 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-02 01:15:06.161045 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-02 01:15:06.161048 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-02 01:15:06.161051 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-02 01:15:06.161055 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-02 01:15:06.161058 | orchestrator | 2026-03-02 01:15:06.161061 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-02 01:15:06.161065 | orchestrator | Monday 02 March 2026 01:11:19 +0000 (0:00:13.992) 0:00:46.284 ********** 2026-03-02 01:15:06.161068 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:15:06.161072 | orchestrator | 2026-03-02 01:15:06.161075 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-03-02 01:15:06.161078 | orchestrator | Monday 02 March 2026 01:11:20 +0000 (0:00:00.619) 0:00:46.904 ********** 2026-03-02 01:15:06.161081 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:15:06.161085 | orchestrator | 2026-03-02 01:15:06.161088 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-02 01:15:06.161091 | orchestrator | Monday 02 March 2026 01:11:25 +0000 (0:00:05.212) 0:00:52.117 ********** 2026-03-02 01:15:06.161095 | orchestrator | 
changed: [testbed-node-0] 2026-03-02 01:15:06.161098 | orchestrator | 2026-03-02 01:15:06.161101 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-02 01:15:06.161113 | orchestrator | Monday 02 March 2026 01:11:29 +0000 (0:00:03.905) 0:00:56.022 ********** 2026-03-02 01:15:06.161117 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:15:06.161120 | orchestrator | 2026-03-02 01:15:06.161124 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-02 01:15:06.161127 | orchestrator | Monday 02 March 2026 01:11:32 +0000 (0:00:02.863) 0:00:58.886 ********** 2026-03-02 01:15:06.161130 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-02 01:15:06.161133 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-02 01:15:06.161137 | orchestrator | 2026-03-02 01:15:06.161140 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-02 01:15:06.161143 | orchestrator | Monday 02 March 2026 01:11:41 +0000 (0:00:08.928) 0:01:07.814 ********** 2026-03-02 01:15:06.161151 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-02 01:15:06.161155 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-02 01:15:06.161159 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-02 01:15:06.161163 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-02 01:15:06.161166 | orchestrator | 2026-03-02 01:15:06.161170 | orchestrator | TASK [octavia : Create loadbalancer management 
network] ************************ 2026-03-02 01:15:06.161173 | orchestrator | Monday 02 March 2026 01:11:57 +0000 (0:00:16.194) 0:01:24.009 ********** 2026-03-02 01:15:06.161176 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:15:06.161180 | orchestrator | 2026-03-02 01:15:06.161183 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-02 01:15:06.161186 | orchestrator | Monday 02 March 2026 01:12:03 +0000 (0:00:05.741) 0:01:29.750 ********** 2026-03-02 01:15:06.161190 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:15:06.161193 | orchestrator | 2026-03-02 01:15:06.161196 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-02 01:15:06.161202 | orchestrator | Monday 02 March 2026 01:12:08 +0000 (0:00:04.716) 0:01:34.467 ********** 2026-03-02 01:15:06.161205 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:15:06.161209 | orchestrator | 2026-03-02 01:15:06.161212 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-02 01:15:06.161215 | orchestrator | Monday 02 March 2026 01:12:08 +0000 (0:00:00.199) 0:01:34.667 ********** 2026-03-02 01:15:06.161219 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:15:06.161222 | orchestrator | 2026-03-02 01:15:06.161225 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-02 01:15:06.161229 | orchestrator | Monday 02 March 2026 01:12:11 +0000 (0:00:03.092) 0:01:37.759 ********** 2026-03-02 01:15:06.161232 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:15:06.161235 | orchestrator | 2026-03-02 01:15:06.161239 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-02 01:15:06.161242 | orchestrator | Monday 02 March 2026 01:12:12 +0000 (0:00:00.959) 0:01:38.718 
********** 2026-03-02 01:15:06.161245 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:15:06.161248 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:15:06.161252 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:15:06.161255 | orchestrator | 2026-03-02 01:15:06.161266 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-02 01:15:06.161274 | orchestrator | Monday 02 March 2026 01:12:17 +0000 (0:00:04.819) 0:01:43.538 ********** 2026-03-02 01:15:06.161277 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:15:06.161281 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:15:06.161284 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:15:06.161287 | orchestrator | 2026-03-02 01:15:06.161291 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-02 01:15:06.161294 | orchestrator | Monday 02 March 2026 01:12:21 +0000 (0:00:04.031) 0:01:47.570 ********** 2026-03-02 01:15:06.161297 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:15:06.161301 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:15:06.161304 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:15:06.161307 | orchestrator | 2026-03-02 01:15:06.161312 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-02 01:15:06.161320 | orchestrator | Monday 02 March 2026 01:12:21 +0000 (0:00:00.698) 0:01:48.268 ********** 2026-03-02 01:15:06.161327 | orchestrator | ok: [testbed-node-1] 2026-03-02 01:15:06.161332 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:15:06.161338 | orchestrator | ok: [testbed-node-2] 2026-03-02 01:15:06.161343 | orchestrator | 2026-03-02 01:15:06.161348 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-02 01:15:06.161354 | orchestrator | Monday 02 March 2026 01:12:23 +0000 (0:00:01.948) 0:01:50.216 ********** 2026-03-02 
01:15:06.161360 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:15:06.161366 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:15:06.161372 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:15:06.161378 | orchestrator | 2026-03-02 01:15:06.161384 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-02 01:15:06.161390 | orchestrator | Monday 02 March 2026 01:12:25 +0000 (0:00:01.245) 0:01:51.462 ********** 2026-03-02 01:15:06.161396 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:15:06.161401 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:15:06.161407 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:15:06.161410 | orchestrator | 2026-03-02 01:15:06.161414 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-02 01:15:06.161417 | orchestrator | Monday 02 March 2026 01:12:26 +0000 (0:00:00.983) 0:01:52.446 ********** 2026-03-02 01:15:06.161420 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:15:06.161423 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:15:06.161427 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:15:06.161430 | orchestrator | 2026-03-02 01:15:06.161442 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-02 01:15:06.161446 | orchestrator | Monday 02 March 2026 01:12:27 +0000 (0:00:01.806) 0:01:54.252 ********** 2026-03-02 01:15:06.161449 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:15:06.161453 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:15:06.161456 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:15:06.161461 | orchestrator | 2026-03-02 01:15:06.161466 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-02 01:15:06.161472 | orchestrator | Monday 02 March 2026 01:12:29 +0000 (0:00:01.747) 0:01:55.999 ********** 2026-03-02 01:15:06.161532 
| orchestrator | ok: [testbed-node-1] 2026-03-02 01:15:06.161539 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:15:06.161545 | orchestrator | ok: [testbed-node-2] 2026-03-02 01:15:06.161551 | orchestrator | 2026-03-02 01:15:06.161558 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-02 01:15:06.161570 | orchestrator | Monday 02 March 2026 01:12:30 +0000 (0:00:00.557) 0:01:56.557 ********** 2026-03-02 01:15:06.161576 | orchestrator | ok: [testbed-node-2] 2026-03-02 01:15:06.161582 | orchestrator | ok: [testbed-node-1] 2026-03-02 01:15:06.161587 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:15:06.161593 | orchestrator | 2026-03-02 01:15:06.161598 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-02 01:15:06.161603 | orchestrator | Monday 02 March 2026 01:12:32 +0000 (0:00:02.421) 0:01:58.979 ********** 2026-03-02 01:15:06.161609 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:15:06.161615 | orchestrator | 2026-03-02 01:15:06.161622 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-02 01:15:06.161629 | orchestrator | Monday 02 March 2026 01:12:33 +0000 (0:00:00.600) 0:01:59.579 ********** 2026-03-02 01:15:06.161635 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:15:06.161641 | orchestrator | 2026-03-02 01:15:06.161646 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-02 01:15:06.161649 | orchestrator | Monday 02 March 2026 01:12:37 +0000 (0:00:03.889) 0:02:03.469 ********** 2026-03-02 01:15:06.161654 | orchestrator | ok: [testbed-node-0] 2026-03-02 01:15:06.161658 | orchestrator | 2026-03-02 01:15:06.161662 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-02 01:15:06.161666 | 
orchestrator | Monday 02 March 2026 01:12:40 +0000 (0:00:03.136) 0:02:06.606 **********
2026-03-02 01:15:06.161670 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-03-02 01:15:06.161674 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-03-02 01:15:06.161678 | orchestrator |
2026-03-02 01:15:06.161682 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-03-02 01:15:06.161686 | orchestrator | Monday 02 March 2026 01:12:46 +0000 (0:00:05.941) 0:02:12.547 **********
2026-03-02 01:15:06.161690 | orchestrator | ok: [testbed-node-0]
2026-03-02 01:15:06.161861 | orchestrator |
2026-03-02 01:15:06.161873 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-03-02 01:15:06.161879 | orchestrator | Monday 02 March 2026 01:12:49 +0000 (0:00:03.364) 0:02:15.912 **********
2026-03-02 01:15:06.161885 | orchestrator | ok: [testbed-node-0]
2026-03-02 01:15:06.161892 | orchestrator | ok: [testbed-node-1]
2026-03-02 01:15:06.161897 | orchestrator | ok: [testbed-node-2]
2026-03-02 01:15:06.161903 | orchestrator |
2026-03-02 01:15:06.161909 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-03-02 01:15:06.161914 | orchestrator | Monday 02 March 2026 01:12:49 +0000 (0:00:00.329) 0:02:16.242 **********
2026-03-02 01:15:06.161919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-02 01:15:06.161947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-02 01:15:06.161957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-02 01:15:06.161964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-02 01:15:06.161970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-02 01:15:06.161976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-02 01:15:06.161986 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.161993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 
'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162087 | orchestrator | 2026-03-02 01:15:06.162090 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-02 01:15:06.162094 | orchestrator | Monday 02 March 2026 01:12:52 +0000 (0:00:02.371) 0:02:18.613 ********** 2026-03-02 01:15:06.162097 | orchestrator | 
skipping: [testbed-node-0] 2026-03-02 01:15:06.162101 | orchestrator | 2026-03-02 01:15:06.162118 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-02 01:15:06.162122 | orchestrator | Monday 02 March 2026 01:12:52 +0000 (0:00:00.137) 0:02:18.751 ********** 2026-03-02 01:15:06.162126 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:15:06.162129 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:15:06.162132 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:15:06.162135 | orchestrator | 2026-03-02 01:15:06.162140 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-02 01:15:06.162146 | orchestrator | Monday 02 March 2026 01:12:52 +0000 (0:00:00.488) 0:02:19.240 ********** 2026-03-02 01:15:06.162154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-02 01:15:06.162161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-02 01:15:06.162173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-02 01:15:06.162179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-02 01:15:06.162185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-02 01:15:06.162200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-02 01:15:06.162209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-02 01:15:06.162215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-02 01:15:06.162225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:15:06.162230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:15:06.162236 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:15:06.162242 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:15:06.162248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-02 01:15:06.162270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-02 01:15:06.162278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': 
'30'}}})  2026-03-02 01:15:06.162282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-02 01:15:06.162288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:15:06.162292 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:15:06.162295 | orchestrator | 2026-03-02 01:15:06.162299 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-02 01:15:06.162302 | orchestrator | Monday 02 March 2026 01:12:53 +0000 (0:00:00.679) 0:02:19.919 ********** 2026-03-02 01:15:06.162306 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-02 01:15:06.162310 | orchestrator | 2026-03-02 01:15:06.162316 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-02 01:15:06.162321 | orchestrator | Monday 02 
March 2026 01:12:54 +0000 (0:00:00.532) 0:02:20.451 ********** 2026-03-02 01:15:06.162327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-02 01:15:06.162344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 
2026-03-02 01:15:06.162351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-02 01:15:06.162360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-02 01:15:06.162367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-02 01:15:06.162373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-02 01:15:06.162378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162393 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162433 | orchestrator | 2026-03-02 01:15:06.162437 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-02 01:15:06.162440 | orchestrator | Monday 02 March 2026 01:12:59 +0000 (0:00:04.976) 0:02:25.428 ********** 2026-03-02 01:15:06.162445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-02 01:15:06.162451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-02 01:15:06.162455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-02 01:15:06.162458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-02 01:15:06.162462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:15:06.162468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-02 01:15:06.162472 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:15:06.162490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-02 01:15:06.162502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-02 01:15:06.162506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-02 01:15:06.162510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:15:06.162515 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:15:06.162519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-02 01:15:06.162523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-02 01:15:06.162532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-02 01:15:06.162539 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-02 01:15:06.162543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:15:06.162549 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:15:06.162557 | orchestrator | 2026-03-02 01:15:06.162564 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-02 01:15:06.162570 | orchestrator | Monday 02 March 2026 01:13:00 +0000 (0:00:00.980) 0:02:26.408 ********** 2026-03-02 01:15:06.162575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-02 01:15:06.162581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-02 01:15:06.162587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-02 01:15:06.162626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-02 01:15:06.162632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:15:06.162636 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:15:06.162640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-02 01:15:06.162644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-02 01:15:06.162648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-02 01:15:06.162653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-02 01:15:06.162667 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:15:06.162673 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:15:06.162682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-02 01:15:06.162688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-02 01:15:06.162694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-02 01:15:06.162700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-02 01:15:06.162705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-02 01:15:06.162715 | orchestrator | skipping: [testbed-node-1] 2026-03-02 01:15:06.162721 | orchestrator | 2026-03-02 01:15:06.162727 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-02 01:15:06.162732 | orchestrator | Monday 02 March 2026 01:13:01 +0000 (0:00:01.209) 0:02:27.618 ********** 2026-03-02 01:15:06.162740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-02 01:15:06.162745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-02 01:15:06.162751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-02 01:15:06.162757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-02 01:15:06.162764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-02 01:15:06.162781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-02 01:15:06.162794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162841 | orchestrator | 2026-03-02 01:15:06.162845 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-02 01:15:06.162848 | orchestrator | Monday 02 March 2026 01:13:06 +0000 (0:00:05.002) 0:02:32.620 ********** 2026-03-02 01:15:06.162851 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-02 01:15:06.162855 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-02 01:15:06.162859 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-02 01:15:06.162862 | orchestrator | 2026-03-02 01:15:06.162865 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-02 01:15:06.162869 | orchestrator | Monday 02 March 2026 01:13:07 +0000 (0:00:01.629) 0:02:34.249 ********** 2026-03-02 01:15:06.162872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-02 01:15:06.162878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-02 01:15:06.162886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-02 01:15:06.162890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-02 01:15:06.162893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-02 01:15:06.162897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-02 01:15:06.162900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:15:06.162963 | orchestrator | 2026-03-02 01:15:06.162969 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-02 01:15:06.162975 | orchestrator | Monday 02 March 2026 01:13:24 +0000 (0:00:16.878) 0:02:51.128 ********** 2026-03-02 01:15:06.162981 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:15:06.162985 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:15:06.162988 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:15:06.162992 | orchestrator | 2026-03-02 01:15:06.162995 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-02 01:15:06.162998 | orchestrator | Monday 02 March 2026 01:13:26 +0000 (0:00:01.324) 0:02:52.452 ********** 2026-03-02 01:15:06.163002 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-02 01:15:06.163005 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-02 01:15:06.163011 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-02 01:15:06.163015 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-02 01:15:06.163018 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-02 01:15:06.163021 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-02 01:15:06.163025 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-02 01:15:06.163028 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-02 01:15:06.163031 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-02 01:15:06.163035 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-02 01:15:06.163038 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-02 01:15:06.163043 | orchestrator | changed: 
[testbed-node-0] => (item=server_ca.key.pem) 2026-03-02 01:15:06.163047 | orchestrator | 2026-03-02 01:15:06.163050 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-02 01:15:06.163054 | orchestrator | Monday 02 March 2026 01:13:31 +0000 (0:00:05.216) 0:02:57.669 ********** 2026-03-02 01:15:06.163057 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-02 01:15:06.163060 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-02 01:15:06.163064 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-02 01:15:06.163067 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-02 01:15:06.163070 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-02 01:15:06.163073 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-02 01:15:06.163077 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-02 01:15:06.163080 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-02 01:15:06.163083 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-02 01:15:06.163087 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-02 01:15:06.163094 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-02 01:15:06.163098 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-02 01:15:06.163101 | orchestrator | 2026-03-02 01:15:06.163104 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-02 01:15:06.163108 | orchestrator | Monday 02 March 2026 01:13:36 +0000 (0:00:05.662) 0:03:03.331 ********** 2026-03-02 01:15:06.163111 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-02 01:15:06.163114 | orchestrator | changed: [testbed-node-1] => 
(item=client.cert-and-key.pem) 2026-03-02 01:15:06.163118 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-02 01:15:06.163121 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-02 01:15:06.163124 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-02 01:15:06.163128 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-02 01:15:06.163131 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-02 01:15:06.163135 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-02 01:15:06.163138 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-02 01:15:06.163141 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-02 01:15:06.163145 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-02 01:15:06.163148 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-02 01:15:06.163151 | orchestrator | 2026-03-02 01:15:06.163154 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-02 01:15:06.163158 | orchestrator | Monday 02 March 2026 01:13:43 +0000 (0:00:06.672) 0:03:10.004 ********** 2026-03-02 01:15:06.163161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-02 01:15:06.163168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-02 01:15:06.163174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-02 01:15:06.163180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-02 01:15:06.163184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-02 01:15:06.163188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-02 01:15:06.163191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.163197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.163202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.163209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.163213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.163216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-02 01:15:06.163220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:15:06.163223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:15:06.163230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-02 01:15:06.163234 | orchestrator | 2026-03-02 01:15:06.163238 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-02 01:15:06.163241 | orchestrator | Monday 02 March 2026 01:13:47 +0000 (0:00:03.994) 0:03:13.998 ********** 2026-03-02 01:15:06.163247 | orchestrator | skipping: [testbed-node-0] 2026-03-02 01:15:06.163250 | orchestrator | skipping: 
[testbed-node-1] 2026-03-02 01:15:06.163253 | orchestrator | skipping: [testbed-node-2] 2026-03-02 01:15:06.163257 | orchestrator | 2026-03-02 01:15:06.163260 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-02 01:15:06.163266 | orchestrator | Monday 02 March 2026 01:13:47 +0000 (0:00:00.282) 0:03:14.281 ********** 2026-03-02 01:15:06.163269 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:15:06.163272 | orchestrator | 2026-03-02 01:15:06.163276 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-02 01:15:06.163279 | orchestrator | Monday 02 March 2026 01:13:49 +0000 (0:00:01.948) 0:03:16.230 ********** 2026-03-02 01:15:06.163282 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:15:06.163286 | orchestrator | 2026-03-02 01:15:06.163289 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-02 01:15:06.163292 | orchestrator | Monday 02 March 2026 01:13:51 +0000 (0:00:02.155) 0:03:18.385 ********** 2026-03-02 01:15:06.163296 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:15:06.163299 | orchestrator | 2026-03-02 01:15:06.163302 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-02 01:15:06.163306 | orchestrator | Monday 02 March 2026 01:13:54 +0000 (0:00:02.313) 0:03:20.699 ********** 2026-03-02 01:15:06.163309 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:15:06.163312 | orchestrator | 2026-03-02 01:15:06.163316 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-03-02 01:15:06.163319 | orchestrator | Monday 02 March 2026 01:13:56 +0000 (0:00:02.553) 0:03:23.253 ********** 2026-03-02 01:15:06.163323 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:15:06.163326 | orchestrator | 2026-03-02 01:15:06.163329 | orchestrator | TASK [octavia : Flush handlers] 
************************************************ 2026-03-02 01:15:06.163333 | orchestrator | Monday 02 March 2026 01:14:17 +0000 (0:00:21.079) 0:03:44.332 ********** 2026-03-02 01:15:06.163336 | orchestrator | 2026-03-02 01:15:06.163339 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-02 01:15:06.163342 | orchestrator | Monday 02 March 2026 01:14:18 +0000 (0:00:00.069) 0:03:44.402 ********** 2026-03-02 01:15:06.163357 | orchestrator | 2026-03-02 01:15:06.163361 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-02 01:15:06.163364 | orchestrator | Monday 02 March 2026 01:14:18 +0000 (0:00:00.064) 0:03:44.467 ********** 2026-03-02 01:15:06.163367 | orchestrator | 2026-03-02 01:15:06.163371 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-02 01:15:06.163374 | orchestrator | Monday 02 March 2026 01:14:18 +0000 (0:00:00.072) 0:03:44.539 ********** 2026-03-02 01:15:06.163377 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:15:06.163381 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:15:06.163384 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:15:06.163388 | orchestrator | 2026-03-02 01:15:06.163391 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-02 01:15:06.163394 | orchestrator | Monday 02 March 2026 01:14:32 +0000 (0:00:14.485) 0:03:59.025 ********** 2026-03-02 01:15:06.163398 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:15:06.163401 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:15:06.163404 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:15:06.163407 | orchestrator | 2026-03-02 01:15:06.163411 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-02 01:15:06.163414 | orchestrator | Monday 02 March 2026 01:14:43 +0000 (0:00:11.225) 
0:04:10.251 ********** 2026-03-02 01:15:06.163418 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:15:06.163421 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:15:06.163424 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:15:06.163428 | orchestrator | 2026-03-02 01:15:06.163431 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-02 01:15:06.163434 | orchestrator | Monday 02 March 2026 01:14:52 +0000 (0:00:08.627) 0:04:18.878 ********** 2026-03-02 01:15:06.163440 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:15:06.163444 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:15:06.163447 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:15:06.163450 | orchestrator | 2026-03-02 01:15:06.163454 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-02 01:15:06.163460 | orchestrator | Monday 02 March 2026 01:15:00 +0000 (0:00:08.218) 0:04:27.097 ********** 2026-03-02 01:15:06.163466 | orchestrator | changed: [testbed-node-0] 2026-03-02 01:15:06.163473 | orchestrator | changed: [testbed-node-2] 2026-03-02 01:15:06.163495 | orchestrator | changed: [testbed-node-1] 2026-03-02 01:15:06.163501 | orchestrator | 2026-03-02 01:15:06.163507 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 01:15:06.163512 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-02 01:15:06.163518 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-02 01:15:06.163524 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-02 01:15:06.163529 | orchestrator | 2026-03-02 01:15:06.163535 | orchestrator | 2026-03-02 01:15:06.163541 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-02 01:15:06.163546 | orchestrator | Monday 02 March 2026 01:15:05 +0000 (0:00:04.725) 0:04:31.823 ********** 2026-03-02 01:15:06.163556 | orchestrator | =============================================================================== 2026-03-02 01:15:06.163561 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.08s 2026-03-02 01:15:06.163567 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.88s 2026-03-02 01:15:06.163572 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.19s 2026-03-02 01:15:06.163577 | orchestrator | octavia : Restart octavia-api container -------------------------------- 14.49s 2026-03-02 01:15:06.163583 | orchestrator | octavia : Adding octavia related roles --------------------------------- 13.99s 2026-03-02 01:15:06.163592 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.23s 2026-03-02 01:15:06.163601 | orchestrator | octavia : Create security groups for octavia ---------------------------- 8.93s 2026-03-02 01:15:06.163606 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 8.63s 2026-03-02 01:15:06.163611 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.22s 2026-03-02 01:15:06.163616 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.21s 2026-03-02 01:15:06.163622 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.16s 2026-03-02 01:15:06.163627 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 6.67s 2026-03-02 01:15:06.163632 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.31s 2026-03-02 01:15:06.163637 | orchestrator | octavia : Get security groups for 
octavia ------------------------------- 5.94s 2026-03-02 01:15:06.163643 | orchestrator | octavia : Create loadbalancer management network ------------------------ 5.74s 2026-03-02 01:15:06.163649 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.66s 2026-03-02 01:15:06.163655 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.22s 2026-03-02 01:15:06.163660 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.21s 2026-03-02 01:15:06.163665 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.00s 2026-03-02 01:15:06.163670 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 4.98s 2026-03-02 01:15:06.163673 | orchestrator | 2026-03-02 01:15:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-02 01:15:09.196723 | orchestrator | 2026-03-02 01:15:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-02 01:15:12.238101 | orchestrator | 2026-03-02 01:15:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-02 01:15:15.280544 | orchestrator | 2026-03-02 01:15:15 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-02 01:15:18.320647 | orchestrator | 2026-03-02 01:15:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-02 01:15:21.358835 | orchestrator | 2026-03-02 01:15:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-02 01:15:24.398374 | orchestrator | 2026-03-02 01:15:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-02 01:15:27.432845 | orchestrator | 2026-03-02 01:15:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-02 01:15:30.471446 | orchestrator | 2026-03-02 01:15:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-02 01:15:33.516186 | orchestrator | 2026-03-02 01:15:33 | INFO  | Wait 1 second(s) until refresh of 
running tasks 2026-03-02 01:15:36.561207 | orchestrator | 2026-03-02 01:15:36 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-02 01:15:39.602660 | orchestrator | 2026-03-02 01:15:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-02 01:15:42.647853 | orchestrator | 2026-03-02 01:15:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-02 01:15:45.679409 | orchestrator | 2026-03-02 01:15:45 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-02 01:15:48.724901 | orchestrator | 2026-03-02 01:15:48 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-02 01:15:51.772540 | orchestrator | 2026-03-02 01:15:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-02 01:15:54.813835 | orchestrator | 2026-03-02 01:15:54 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-02 01:15:57.860779 | orchestrator | 2026-03-02 01:15:57 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-02 01:16:00.904482 | orchestrator | 2026-03-02 01:16:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-02 01:16:03.947835 | orchestrator | 2026-03-02 01:16:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-02 01:16:06.988851 | orchestrator | 2026-03-02 01:16:07.339894 | orchestrator | 2026-03-02 01:16:07.345400 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Mar 2 01:16:07 UTC 2026 2026-03-02 01:16:07.345587 | orchestrator | 2026-03-02 01:16:07.693065 | orchestrator | ok: Runtime: 0:35:00.408016 2026-03-02 01:16:07.948317 | 2026-03-02 01:16:07.948458 | TASK [Bootstrap services] 2026-03-02 01:16:08.686916 | orchestrator | 2026-03-02 01:16:08.687001 | orchestrator | # BOOTSTRAP 2026-03-02 01:16:08.687010 | orchestrator | 2026-03-02 01:16:08.687015 | orchestrator | + set -e 2026-03-02 01:16:08.687020 | orchestrator | + echo 2026-03-02 01:16:08.687026 | orchestrator | + echo '# BOOTSTRAP' 2026-03-02 01:16:08.687032 | orchestrator | + echo 
2026-03-02 01:16:08.687050 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-02 01:16:08.696458 | orchestrator | + set -e 2026-03-02 01:16:08.696532 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-02 01:16:13.285640 | orchestrator | 2026-03-02 01:16:13 | INFO  | It takes a moment until task 8d357686-f26c-4f39-9829-56e9f172c97d (flavor-manager) has been started and output is visible here. 2026-03-02 01:16:20.992586 | orchestrator | 2026-03-02 01:16:16 | INFO  | Flavor SCS-1L-1 created 2026-03-02 01:16:20.992688 | orchestrator | 2026-03-02 01:16:16 | INFO  | Flavor SCS-1L-1-5 created 2026-03-02 01:16:20.992704 | orchestrator | 2026-03-02 01:16:16 | INFO  | Flavor SCS-1V-2 created 2026-03-02 01:16:20.992715 | orchestrator | 2026-03-02 01:16:16 | INFO  | Flavor SCS-1V-2-5 created 2026-03-02 01:16:20.992725 | orchestrator | 2026-03-02 01:16:16 | INFO  | Flavor SCS-1V-4 created 2026-03-02 01:16:20.992736 | orchestrator | 2026-03-02 01:16:16 | INFO  | Flavor SCS-1V-4-10 created 2026-03-02 01:16:20.992746 | orchestrator | 2026-03-02 01:16:16 | INFO  | Flavor SCS-1V-8 created 2026-03-02 01:16:20.992771 | orchestrator | 2026-03-02 01:16:17 | INFO  | Flavor SCS-1V-8-20 created 2026-03-02 01:16:20.992801 | orchestrator | 2026-03-02 01:16:17 | INFO  | Flavor SCS-2V-4 created 2026-03-02 01:16:20.992813 | orchestrator | 2026-03-02 01:16:17 | INFO  | Flavor SCS-2V-4-10 created 2026-03-02 01:16:20.992823 | orchestrator | 2026-03-02 01:16:17 | INFO  | Flavor SCS-2V-8 created 2026-03-02 01:16:20.992833 | orchestrator | 2026-03-02 01:16:17 | INFO  | Flavor SCS-2V-8-20 created 2026-03-02 01:16:20.992843 | orchestrator | 2026-03-02 01:16:17 | INFO  | Flavor SCS-2V-16 created 2026-03-02 01:16:20.992853 | orchestrator | 2026-03-02 01:16:17 | INFO  | Flavor SCS-2V-16-50 created 2026-03-02 01:16:20.992863 | orchestrator | 2026-03-02 01:16:18 | INFO  | Flavor SCS-4V-8 created 2026-03-02 01:16:20.992873 | orchestrator | 
2026-03-02 01:16:18 | INFO  | Flavor SCS-4V-8-20 created 2026-03-02 01:16:20.992883 | orchestrator | 2026-03-02 01:16:18 | INFO  | Flavor SCS-4V-16 created 2026-03-02 01:16:20.992893 | orchestrator | 2026-03-02 01:16:18 | INFO  | Flavor SCS-4V-16-50 created 2026-03-02 01:16:20.992903 | orchestrator | 2026-03-02 01:16:19 | INFO  | Flavor SCS-4V-32 created 2026-03-02 01:16:20.992913 | orchestrator | 2026-03-02 01:16:19 | INFO  | Flavor SCS-4V-32-100 created 2026-03-02 01:16:20.992923 | orchestrator | 2026-03-02 01:16:19 | INFO  | Flavor SCS-8V-16 created 2026-03-02 01:16:20.992933 | orchestrator | 2026-03-02 01:16:19 | INFO  | Flavor SCS-8V-16-50 created 2026-03-02 01:16:20.992943 | orchestrator | 2026-03-02 01:16:19 | INFO  | Flavor SCS-8V-32 created 2026-03-02 01:16:20.992953 | orchestrator | 2026-03-02 01:16:19 | INFO  | Flavor SCS-8V-32-100 created 2026-03-02 01:16:20.992963 | orchestrator | 2026-03-02 01:16:20 | INFO  | Flavor SCS-16V-32 created 2026-03-02 01:16:20.992973 | orchestrator | 2026-03-02 01:16:20 | INFO  | Flavor SCS-16V-32-100 created 2026-03-02 01:16:20.992983 | orchestrator | 2026-03-02 01:16:20 | INFO  | Flavor SCS-2V-4-20s created 2026-03-02 01:16:20.992993 | orchestrator | 2026-03-02 01:16:20 | INFO  | Flavor SCS-4V-8-50s created 2026-03-02 01:16:20.993003 | orchestrator | 2026-03-02 01:16:20 | INFO  | Flavor SCS-4V-16-100s created 2026-03-02 01:16:20.993013 | orchestrator | 2026-03-02 01:16:20 | INFO  | Flavor SCS-8V-32-100s created 2026-03-02 01:16:22.942759 | orchestrator | 2026-03-02 01:16:22 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-02 01:16:33.038896 | orchestrator | 2026-03-02 01:16:33 | INFO  | Prepare task for execution of bootstrap-basic. 2026-03-02 01:16:33.114091 | orchestrator | 2026-03-02 01:16:33 | INFO  | Task a678281c-12da-4809-aac8-9945a0a0a033 (bootstrap-basic) was prepared for execution. 
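The flavor names created by flavor-manager above follow the SCS flavor naming scheme: in `SCS-2V-4-10`, the first field is the vCPU count and CPU type (`V` = vCPU, `L` = low-performance/oversubscribed), then RAM in GiB, then an optional root-disk size in GB, with an `s` suffix for local SSD (as in `SCS-8V-32-100s`). A minimal parser sketch, assuming that reading of the scheme (the helper name is our own, not part of flavor-manager):

```python
import re

# Assumed SCS pattern: SCS-<cpus><type>-<ram GiB>[-<disk GB>[s]]
FLAVOR_RE = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<ctype>[VLCT])-(?P<ram>\d+)(?:-(?P<disk>\d+)(?P<ssd>s)?)?$"
)

def parse_scs_flavor(name: str) -> dict:
    """Split an SCS flavor name from the log into its resource dimensions."""
    m = FLAVOR_RE.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    return {
        "vcpus": int(m["cpus"]),
        "cpu_type": m["ctype"],
        "ram_gib": int(m["ram"]),
        # Names without a third field (e.g. SCS-1V-2) are diskless flavors.
        "disk_gb": int(m["disk"]) if m["disk"] else 0,
        "local_ssd": m["ssd"] is not None,
    }
```

For example, `parse_scs_flavor("SCS-2V-4-10")` yields 2 vCPUs, 4 GiB RAM and a 10 GB disk, while `SCS-1L-1` parses as a diskless single-vCPU flavor.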
2026-03-02 01:16:33.114158 | orchestrator | 2026-03-02 01:16:33 | INFO  | It takes a moment until task a678281c-12da-4809-aac8-9945a0a0a033 (bootstrap-basic) has been started and output is visible here. 2026-03-02 01:17:16.903998 | orchestrator | 2026-03-02 01:17:16.904094 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-02 01:17:16.904111 | orchestrator | 2026-03-02 01:17:16.904123 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-02 01:17:16.904134 | orchestrator | Monday 02 March 2026 01:16:37 +0000 (0:00:00.067) 0:00:00.067 ********** 2026-03-02 01:17:16.904145 | orchestrator | ok: [localhost] 2026-03-02 01:17:16.904156 | orchestrator | 2026-03-02 01:17:16.904166 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-02 01:17:16.904176 | orchestrator | Monday 02 March 2026 01:16:39 +0000 (0:00:01.900) 0:00:01.967 ********** 2026-03-02 01:17:16.904260 | orchestrator | ok: [localhost] 2026-03-02 01:17:16.904279 | orchestrator | 2026-03-02 01:17:16.904295 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-02 01:17:16.904311 | orchestrator | Monday 02 March 2026 01:16:47 +0000 (0:00:07.773) 0:00:09.741 ********** 2026-03-02 01:17:16.904327 | orchestrator | changed: [localhost] 2026-03-02 01:17:16.904344 | orchestrator | 2026-03-02 01:17:16.904360 | orchestrator | TASK [Create public network] *************************************************** 2026-03-02 01:17:16.904429 | orchestrator | Monday 02 March 2026 01:16:54 +0000 (0:00:07.300) 0:00:17.041 ********** 2026-03-02 01:17:16.904447 | orchestrator | changed: [localhost] 2026-03-02 01:17:16.904478 | orchestrator | 2026-03-02 01:17:16.904502 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-02 01:17:16.904514 | orchestrator | Monday 02 March 2026 
01:16:59 +0000 (0:00:05.045) 0:00:22.087 ********** 2026-03-02 01:17:16.904524 | orchestrator | changed: [localhost] 2026-03-02 01:17:16.904610 | orchestrator | 2026-03-02 01:17:16.904632 | orchestrator | TASK [Create public subnet] **************************************************** 2026-03-02 01:17:16.904651 | orchestrator | Monday 02 March 2026 01:17:05 +0000 (0:00:05.984) 0:00:28.071 ********** 2026-03-02 01:17:16.904669 | orchestrator | changed: [localhost] 2026-03-02 01:17:16.904685 | orchestrator | 2026-03-02 01:17:16.904704 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-02 01:17:16.904722 | orchestrator | Monday 02 March 2026 01:17:09 +0000 (0:00:03.925) 0:00:31.996 ********** 2026-03-02 01:17:16.904741 | orchestrator | changed: [localhost] 2026-03-02 01:17:16.904759 | orchestrator | 2026-03-02 01:17:16.904776 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-02 01:17:16.904801 | orchestrator | Monday 02 March 2026 01:17:13 +0000 (0:00:03.856) 0:00:35.852 ********** 2026-03-02 01:17:16.904813 | orchestrator | ok: [localhost] 2026-03-02 01:17:16.904825 | orchestrator | 2026-03-02 01:17:16.904837 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-02 01:17:16.904849 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-02 01:17:16.904861 | orchestrator | 2026-03-02 01:17:16.904873 | orchestrator | 2026-03-02 01:17:16.904885 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-02 01:17:16.904903 | orchestrator | Monday 02 March 2026 01:17:16 +0000 (0:00:03.519) 0:00:39.372 ********** 2026-03-02 01:17:16.904924 | orchestrator | =============================================================================== 2026-03-02 01:17:16.904945 | orchestrator | Get volume type LUKS 
---------------------------------------------------- 7.77s 2026-03-02 01:17:16.904985 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.30s 2026-03-02 01:17:16.905001 | orchestrator | Set public network to default ------------------------------------------- 5.98s 2026-03-02 01:17:16.905016 | orchestrator | Create public network --------------------------------------------------- 5.05s 2026-03-02 01:17:16.905033 | orchestrator | Create public subnet ---------------------------------------------------- 3.93s 2026-03-02 01:17:16.905050 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.86s 2026-03-02 01:17:16.905065 | orchestrator | Create manager role ----------------------------------------------------- 3.52s 2026-03-02 01:17:16.905082 | orchestrator | Gathering Facts --------------------------------------------------------- 1.90s 2026-03-02 01:17:19.318884 | orchestrator | 2026-03-02 01:17:19 | INFO  | It takes a moment until task 8c29024b-f56a-4e70-b12d-41ff04481f6e (image-manager) has been started and output is visible here. 2026-03-02 01:18:01.614716 | orchestrator | 2026-03-02 01:17:22 | INFO  | Processing image 'Cirros 0.6.2' 2026-03-02 01:18:01.614812 | orchestrator | 2026-03-02 01:17:22 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-03-02 01:18:01.614825 | orchestrator | 2026-03-02 01:17:22 | INFO  | Importing image Cirros 0.6.2 2026-03-02 01:18:01.614834 | orchestrator | 2026-03-02 01:17:22 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-02 01:18:01.614842 | orchestrator | 2026-03-02 01:17:25 | INFO  | Waiting for image to leave queued state... 2026-03-02 01:18:01.614850 | orchestrator | 2026-03-02 01:17:27 | INFO  | Waiting for import to complete... 
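The "Waiting for image to leave queued state..." and "Waiting for import to complete..." records above are polling loops: the tool repeatedly queries Glance until the image reaches the desired status. A generic sketch of such a wait, with a hypothetical `get_status` callable standing in for the actual Glance API query:

```python
import time

def wait_for(get_status, target: str, timeout: float = 600.0, interval: float = 2.0) -> str:
    """Poll get_status() until it returns `target`; raise on timeout or an error state."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == target:
            return status
        if status == "error":
            raise RuntimeError("image import failed")
        time.sleep(interval)
    raise TimeoutError(f"image did not reach {target!r} within {timeout}s")

# Simulated Glance responses, mirroring the states seen in the log:
states = iter(["queued", "importing", "active"])
wait_for(lambda: next(states), "active", interval=0.01)
```

The ~10 s gap between "Waiting for import to complete..." and "Import ... successfully completed" in the log is exactly this loop sleeping between status checks.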
2026-03-02 01:18:01.614857 | orchestrator | 2026-03-02 01:17:37 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-03-02 01:18:01.614864 | orchestrator | 2026-03-02 01:17:37 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-03-02 01:18:01.614871 | orchestrator | 2026-03-02 01:17:37 | INFO  | Setting internal_version = 0.6.2 2026-03-02 01:18:01.614878 | orchestrator | 2026-03-02 01:17:37 | INFO  | Setting image_original_user = cirros 2026-03-02 01:18:01.614885 | orchestrator | 2026-03-02 01:17:37 | INFO  | Adding tag os:cirros 2026-03-02 01:18:01.614892 | orchestrator | 2026-03-02 01:17:38 | INFO  | Setting property architecture: x86_64 2026-03-02 01:18:01.614898 | orchestrator | 2026-03-02 01:17:38 | INFO  | Setting property hw_disk_bus: scsi 2026-03-02 01:18:01.614905 | orchestrator | 2026-03-02 01:17:38 | INFO  | Setting property hw_rng_model: virtio 2026-03-02 01:18:01.614912 | orchestrator | 2026-03-02 01:17:38 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-02 01:18:01.614918 | orchestrator | 2026-03-02 01:17:38 | INFO  | Setting property hw_watchdog_action: reset 2026-03-02 01:18:01.614925 | orchestrator | 2026-03-02 01:17:39 | INFO  | Setting property hypervisor_type: qemu 2026-03-02 01:18:01.614940 | orchestrator | 2026-03-02 01:17:39 | INFO  | Setting property os_distro: cirros 2026-03-02 01:18:01.614947 | orchestrator | 2026-03-02 01:17:39 | INFO  | Setting property os_purpose: minimal 2026-03-02 01:18:01.614953 | orchestrator | 2026-03-02 01:17:39 | INFO  | Setting property replace_frequency: never 2026-03-02 01:18:01.614960 | orchestrator | 2026-03-02 01:17:39 | INFO  | Setting property uuid_validity: none 2026-03-02 01:18:01.614967 | orchestrator | 2026-03-02 01:17:40 | INFO  | Setting property provided_until: none 2026-03-02 01:18:01.614973 | orchestrator | 2026-03-02 01:17:40 | INFO  | Setting property image_description: Cirros 2026-03-02 01:18:01.614980 | orchestrator | 2026-03-02 01:17:40 | INFO  | 
Setting property image_name: Cirros 2026-03-02 01:18:01.615004 | orchestrator | 2026-03-02 01:17:40 | INFO  | Setting property internal_version: 0.6.2 2026-03-02 01:18:01.615011 | orchestrator | 2026-03-02 01:17:40 | INFO  | Setting property image_original_user: cirros 2026-03-02 01:18:01.615017 | orchestrator | 2026-03-02 01:17:40 | INFO  | Setting property os_version: 0.6.2 2026-03-02 01:18:01.615025 | orchestrator | 2026-03-02 01:17:41 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-02 01:18:01.615033 | orchestrator | 2026-03-02 01:17:41 | INFO  | Setting property image_build_date: 2023-05-30 2026-03-02 01:18:01.615039 | orchestrator | 2026-03-02 01:17:41 | INFO  | Checking status of 'Cirros 0.6.2' 2026-03-02 01:18:01.615046 | orchestrator | 2026-03-02 01:17:41 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-03-02 01:18:01.615056 | orchestrator | 2026-03-02 01:17:41 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-03-02 01:18:01.615064 | orchestrator | 2026-03-02 01:17:41 | INFO  | Processing image 'Cirros 0.6.3' 2026-03-02 01:18:01.615070 | orchestrator | 2026-03-02 01:17:42 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-03-02 01:18:01.615077 | orchestrator | 2026-03-02 01:17:42 | INFO  | Importing image Cirros 0.6.3 2026-03-02 01:18:01.615084 | orchestrator | 2026-03-02 01:17:42 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-02 01:18:01.615091 | orchestrator | 2026-03-02 01:17:43 | INFO  | Waiting for image to leave queued state... 2026-03-02 01:18:01.615098 | orchestrator | 2026-03-02 01:17:45 | INFO  | Waiting for import to complete... 
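Each "Setting property ..." record above corresponds to one metadata update on the Glance image; on a rerun with nothing changed, none of these lines would appear. A sketch of the diff step such a tool might run before issuing updates, assuming a plain dict model of the image properties (the property names and values below are taken from the log):

```python
# Desired image metadata, as applied to the Cirros images in the log.
WANTED = {
    "architecture": "x86_64",
    "hw_disk_bus": "scsi",
    "hw_rng_model": "virtio",
    "hw_scsi_model": "virtio-scsi",
    "hw_watchdog_action": "reset",
    "hypervisor_type": "qemu",
    "os_distro": "cirros",
    "os_purpose": "minimal",
}

def properties_to_set(current: dict, desired: dict) -> dict:
    """Return only the properties that differ, i.e. the 'Setting property ...' lines to emit."""
    return {k: v for k, v in desired.items() if current.get(k) != v}
```

A freshly imported image carries none of the desired metadata, so `properties_to_set({}, WANTED)` returns everything; on a second run `properties_to_set(WANTED, WANTED)` is empty and the tool stays quiet.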
2026-03-02 01:18:01.615118 | orchestrator | 2026-03-02 01:17:55 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-03-02 01:18:01.615126 | orchestrator | 2026-03-02 01:17:55 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-03-02 01:18:01.615134 | orchestrator | 2026-03-02 01:17:55 | INFO  | Setting internal_version = 0.6.3
2026-03-02 01:18:01.615142 | orchestrator | 2026-03-02 01:17:55 | INFO  | Setting image_original_user = cirros
2026-03-02 01:18:01.615149 | orchestrator | 2026-03-02 01:17:55 | INFO  | Adding tag os:cirros
2026-03-02 01:18:01.615157 | orchestrator | 2026-03-02 01:17:56 | INFO  | Setting property architecture: x86_64
2026-03-02 01:18:01.615165 | orchestrator | 2026-03-02 01:17:56 | INFO  | Setting property hw_disk_bus: scsi
2026-03-02 01:18:01.615172 | orchestrator | 2026-03-02 01:17:56 | INFO  | Setting property hw_rng_model: virtio
2026-03-02 01:18:01.615180 | orchestrator | 2026-03-02 01:17:56 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-02 01:18:01.615188 | orchestrator | 2026-03-02 01:17:57 | INFO  | Setting property hw_watchdog_action: reset
2026-03-02 01:18:01.615196 | orchestrator | 2026-03-02 01:17:57 | INFO  | Setting property hypervisor_type: qemu
2026-03-02 01:18:01.615204 | orchestrator | 2026-03-02 01:17:57 | INFO  | Setting property os_distro: cirros
2026-03-02 01:18:01.615212 | orchestrator | 2026-03-02 01:17:57 | INFO  | Setting property os_purpose: minimal
2026-03-02 01:18:01.615221 | orchestrator | 2026-03-02 01:17:57 | INFO  | Setting property replace_frequency: never
2026-03-02 01:18:01.615231 | orchestrator | 2026-03-02 01:17:58 | INFO  | Setting property uuid_validity: none
2026-03-02 01:18:01.615242 | orchestrator | 2026-03-02 01:17:58 | INFO  | Setting property provided_until: none
2026-03-02 01:18:01.615253 | orchestrator | 2026-03-02 01:17:58 | INFO  | Setting property image_description: Cirros
2026-03-02 01:18:01.615273 | orchestrator | 2026-03-02 01:17:58 | INFO  | Setting property image_name: Cirros
2026-03-02 01:18:01.615285 | orchestrator | 2026-03-02 01:17:59 | INFO  | Setting property internal_version: 0.6.3
2026-03-02 01:18:01.615296 | orchestrator | 2026-03-02 01:17:59 | INFO  | Setting property image_original_user: cirros
2026-03-02 01:18:01.615308 | orchestrator | 2026-03-02 01:17:59 | INFO  | Setting property os_version: 0.6.3
2026-03-02 01:18:01.615317 | orchestrator | 2026-03-02 01:17:59 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-02 01:18:01.615325 | orchestrator | 2026-03-02 01:18:00 | INFO  | Setting property image_build_date: 2024-09-26
2026-03-02 01:18:01.615333 | orchestrator | 2026-03-02 01:18:00 | INFO  | Checking status of 'Cirros 0.6.3'
2026-03-02 01:18:01.615341 | orchestrator | 2026-03-02 01:18:00 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-03-02 01:18:01.615349 | orchestrator | 2026-03-02 01:18:00 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-03-02 01:18:01.978984 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-03-02 01:18:04.171922 | orchestrator | 2026-03-02 01:18:04 | INFO  | date: 2026-03-01
2026-03-02 01:18:04.171986 | orchestrator | 2026-03-02 01:18:04 | INFO  | image: octavia-amphora-haproxy-2024.2.20260301.qcow2
2026-03-02 01:18:04.172018 | orchestrator | 2026-03-02 01:18:04 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260301.qcow2
2026-03-02 01:18:04.172030 | orchestrator | 2026-03-02 01:18:04 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260301.qcow2.CHECKSUM
2026-03-02 01:18:04.266464 | orchestrator | 2026-03-02 01:18:04 | INFO  | checksum: 
localhost | ok: "/var/lib/zuul/builds/a6b4ba0debdc473ca1e410a73926d669/work/logs"
2026-03-02 01:18:38.643089 | 
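The amphora bootstrap step above logs a `checksum_url` pointing at a `.CHECKSUM` file for the downloaded qcow2 image; verifying such a download amounts to streaming a digest over the file and comparing it with the published value. A minimal sketch, assuming a SHA-256 digest and a `<hex>  <filename>` line layout (the real CHECKSUM file's algorithm and layout may differ):

```python
import hashlib


def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in chunks so multi-GB images need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_checksum(image_path, checksum_line):
    """Compare against the first token of a '<hex>  <filename>' CHECKSUM line.

    Assumption: this layout is illustrative, not taken from the actual script.
    """
    expected = checksum_line.split()[0].lower()
    return sha256_of(image_path) == expected
```

The streaming loop is the important part of the design: reading the whole image with a single `read()` would double the job's memory footprint for no benefit.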
orchestrator -> localhost | changed: "/var/lib/zuul/builds/a6b4ba0debdc473ca1e410a73926d669/work/artifacts"
2026-03-02 01:18:38.941899 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/a6b4ba0debdc473ca1e410a73926d669/work/docs"
2026-03-02 01:18:38.961864 | 
2026-03-02 01:18:38.962010 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-02 01:18:39.899618 | orchestrator | changed: .d..t...... ./
2026-03-02 01:18:39.899955 | orchestrator | changed: All items complete
2026-03-02 01:18:39.900011 | 
2026-03-02 01:18:40.637733 | orchestrator | changed: .d..t...... ./
2026-03-02 01:18:41.427241 | orchestrator | changed: .d..t...... ./
2026-03-02 01:18:41.453793 | 
2026-03-02 01:18:41.453945 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-02 01:18:41.491029 | orchestrator | skipping: Conditional result was False
2026-03-02 01:18:41.493888 | orchestrator | skipping: Conditional result was False
2026-03-02 01:18:41.517721 | 
2026-03-02 01:18:41.517841 | PLAY RECAP
2026-03-02 01:18:41.517916 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-02 01:18:41.517954 | 
2026-03-02 01:18:41.650981 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-02 01:18:41.653134 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-02 01:18:42.408085 | 
2026-03-02 01:18:42.408262 | PLAY [Base post]
2026-03-02 01:18:42.423333 | 
2026-03-02 01:18:42.423497 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-02 01:18:43.461926 | orchestrator | changed
2026-03-02 01:18:43.474798 | 
2026-03-02 01:18:43.474977 | PLAY RECAP
2026-03-02 01:18:43.475064 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-02 01:18:43.475205 | 
2026-03-02 01:18:43.596405 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-02 01:18:43.597497 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-02 01:18:44.396649 | 
2026-03-02 01:18:44.396823 | PLAY [Base post-logs]
2026-03-02 01:18:44.407632 | 
2026-03-02 01:18:44.407772 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-02 01:18:44.872484 | localhost | changed
2026-03-02 01:18:44.891179 | 
2026-03-02 01:18:44.891447 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-02 01:18:44.930805 | localhost | ok
2026-03-02 01:18:44.937340 | 
2026-03-02 01:18:44.937511 | TASK [Set zuul-log-path fact]
2026-03-02 01:18:44.956947 | localhost | ok
2026-03-02 01:18:44.974041 | 
2026-03-02 01:18:44.974206 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-02 01:18:45.012947 | localhost | ok
2026-03-02 01:18:45.020277 | 
2026-03-02 01:18:45.020516 | TASK [upload-logs : Create log directories]
2026-03-02 01:18:45.517466 | localhost | changed
2026-03-02 01:18:45.522266 | 
2026-03-02 01:18:45.522495 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-02 01:18:46.018300 | localhost -> localhost | ok: Runtime: 0:00:00.006735
2026-03-02 01:18:46.026097 | 
2026-03-02 01:18:46.026271 | TASK [upload-logs : Upload logs to log server]
2026-03-02 01:18:46.604985 | localhost | Output suppressed because no_log was given
2026-03-02 01:18:46.609613 | 
2026-03-02 01:18:46.609858 | LOOP [upload-logs : Compress console log and json output]
2026-03-02 01:18:46.674043 | localhost | skipping: Conditional result was False
2026-03-02 01:18:46.679885 | localhost | skipping: Conditional result was False
2026-03-02 01:18:46.693114 | 
2026-03-02 01:18:46.693338 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-02 01:18:46.749822 | localhost | skipping: Conditional result was False
2026-03-02 01:18:46.750586 | 
2026-03-02 01:18:46.754513 | localhost | skipping: Conditional result was False
2026-03-02 01:18:46.767758 | 
2026-03-02 01:18:46.767970 | LOOP [upload-logs : Upload console log and json output]
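The `changed: .d..t...... ./` lines emitted by fetch-output are rsync `--itemize-changes` strings: the first character is the update type, the second the file type, and the remaining letters flag which attributes differed. A small illustrative decoder (not part of the job, and simplified: it ignores the update-type character and special markers such as `+`):

```python
def decode_itemize(item):
    """Decode one rsync --itemize-changes line like '.d..t...... ./'."""
    code, path = item.split(None, 1)
    kinds = {"f": "file", "d": "directory", "L": "symlink", "D": "device", "S": "special"}
    kind = kinds.get(code[1], code[1])
    # Letters appear at fixed positions only when that attribute changed,
    # so a simple membership check per character is sufficient here.
    attr_names = {
        "c": "checksum differs",
        "s": "size differs",
        "t": "mtime differs",
        "p": "permissions differ",
        "o": "owner differs",
        "g": "group differs",
    }
    attrs = [attr_names[ch] for ch in code[2:] if ch in attr_names]
    return kind, attrs, path
```

`decode_itemize(".d..t...... ./")` reports a directory whose modification time differed, which matches the log: rsync only refreshed the timestamp on the destination directory.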